Search Results

Search found 13151 results on 527 pages for 'performance counters'.

Page 473/527

  • Do the Python libraries have a natural dependence on the global namespace?

    - by msw
    I first ran into this when trying to determine the relative performance of two generators:

        t = timeit.repeat('g.get()', setup='g = my_generator()')

    So I dug into the timeit module and found that the setup and statement are evaluated in their own private, initially empty namespaces, so naturally the binding of g never becomes accessible to the g.get() statement. The obvious solution is to wrap them into a class, thus adding to the global namespace.

    I bumped into this again when attempting, in another project, to use the multiprocessing module to divide a task among workers. I even bundled everything nicely into a class, but unfortunately the call pool.apply_async(runmc, arg) fails with a PicklingError, because buried inside the work object that runmc instantiates is (effectively) an assignment:

        self.predicate = lambda x, y: x > y

    so, understandably, the whole object can't be pickled. And whereas

        def foo(x, y): return x > y
        pickle.dumps(foo)

    is fine, the sequence

        bar = lambda x, y: x > y

    yields True from callable(bar) and looks fine from type(bar), but pickling it fails with:

        Can't pickle <function <lambda> at 0xb759b764>: it's not found as __main__.<lambda>

    I've given only code fragments because I can easily fix these cases by merely pulling them out into module- or object-level defs. The bug here appears to be in my understanding of the semantics of namespace use in general. If the nature of the language requires that I create more def statements, I'll happily do so; I fear that I'm missing an essential concept, though. Why is there such a strong reliance on the global namespace? Or, what am I failing to understand? Namespaces are one honking great idea -- let's do more of those!

  • Generating jquery 'rules' from business model to UI in asp.net mvc

    - by jim
    Hi all, I've had a good look around and am certain that there's no matching question on SO, so here goes.

    Has anyone created a 'helper' method on their model that generates jQuery (or plain JavaScript) validation rules dynamically, based on the criteria/rules that are contained within the object and taken from a repository (i.e. DB)? What I'm thinking of is a discrete set of partial views (and associated models) that have rules at the business-logic 'level', and rather than (or in combination with) validating the rule(s) at postback, translating the same rules into tightly focused jQuery methods that work identically at the client (js) and server (c#) levels. I can see benefits here re performance. Also, the rules definitions could be created in a single place (in c#) and the jQuery generated off of that, thus allowing single edits to update both code streams.

    I appreciate that there would be limitations imposed by language-specific constraints, but the general principle could be quite interesting if used appropriately. I'm also aware that testability could be an issue when using two different language structures and hoping to achieve similar test outcomes - but those aside... any thoughts or experiences of similar out there?

    cheers
    jimi
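
    To make it concrete, here's a rough, untested sketch of the kind of helper I mean, reading System.ComponentModel.DataAnnotations attributes off the model; the class name and the emitted JSON shape are just illustrative:

        using System;
        using System.ComponentModel.DataAnnotations;
        using System.Linq;
        using System.Text;

        public static class ValidationRuleEmitter
        {
            // Emits a jQuery Validate style rules object for one model type,
            // e.g. { "Name": { "required": true, "maxlength": 50 } }
            public static string RulesFor(Type modelType)
            {
                var sb = new StringBuilder("{");
                foreach (var prop in modelType.GetProperties())
                {
                    var rules = new StringBuilder();
                    if (prop.GetCustomAttributes(typeof(RequiredAttribute), true).Any())
                        rules.Append("\"required\": true, ");
                    var len = prop.GetCustomAttributes(typeof(StringLengthAttribute), true)
                                  .Cast<StringLengthAttribute>().FirstOrDefault();
                    if (len != null)
                        rules.AppendFormat("\"maxlength\": {0}, ", len.MaximumLength);
                    if (rules.Length > 0)
                        sb.AppendFormat("\"{0}\": {{ {1} }}, ", prop.Name,
                                        rules.ToString().TrimEnd(' ', ','));
                }
                return sb.ToString().TrimEnd(' ', ',') + "}";
            }
        }

    The emitted string could then be dropped into the view and handed to $('form').validate({ rules: ... }), so the c# attributes stay the single source of truth.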

  • SQL Server 2005 multiple insert with C#

    - by bottlenecked
    Hello. I have a class named Entry declared like this:

        class Entry {
            string Id { get; set; }
            string Name { get; set; }
        }

    and then a method that will accept multiple such Entry objects for insertion into the database using ADO.NET:

        static void InsertEntries(IEnumerable<Entry> entries) {
            // build a SqlCommand object
            using (SqlCommand cmd = new SqlCommand()) {
                ...
                const string refcmdText = "INSERT INTO Entries (id, name) VALUES (@id{0}, @name{0});";
                int count = 0;
                string query = string.Empty;
                // build a large query
                foreach (var entry in entries) {
                    query += string.Format(refcmdText, count);
                    cmd.Parameters.AddWithValue(string.Format("@id{0}", count), entry.Id);
                    cmd.Parameters.AddWithValue(string.Format("@name{0}", count), entry.Name);
                    count++;
                }
                cmd.CommandText = query;
                // and then execute the command
                ...
            }
        }

    And my question is this: should I keep using the above way of sending multiple insert statements (build a giant string of insert statements and their parameters and send it over the network), or should I keep an open connection and send a single insert statement for each Entry, like this:

        using (SqlConnection conn = new SqlConnection())
        using (SqlCommand cmd = new SqlCommand()) {
            // assign connection string and open connection
            ...
            cmd.Connection = conn;
            foreach (var entry in entries) {
                cmd.CommandText = "INSERT INTO Entries (id, name) VALUES (@id, @name);";
                cmd.Parameters.Clear(); // parameters must be reset on each iteration
                cmd.Parameters.AddWithValue("@id", entry.Id);
                cmd.Parameters.AddWithValue("@name", entry.Name);
                cmd.ExecuteNonQuery();
            }
        }

    What do you think? Will there be a performance difference in the SQL Server between the two? Are there any other consequences I should be aware of? Thank you for your time!
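
    (A third option I've come across is SqlBulkCopy, which streams all rows in one bulk operation; a rough, untested sketch, assuming the Entries table has matching id and name columns:)

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        static void BulkInsertEntries(IEnumerable<Entry> entries, string connectionString)
        {
            // Stage the entries in an in-memory DataTable...
            var table = new DataTable();
            table.Columns.Add("id", typeof(string));
            table.Columns.Add("name", typeof(string));
            foreach (var entry in entries)
                table.Rows.Add(entry.Id, entry.Name);

            // ...and hand them to the server in a single bulk operation.
            using (var bulk = new SqlBulkCopy(connectionString))
            {
                bulk.DestinationTableName = "Entries";
                bulk.WriteToServer(table);
            }
        }

    Would that be worth considering here too, or is it overkill for modest batch sizes?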

  • Round date to 10-minute interval

    - by Peter Lang
    I have a DATE column that I want to round to the next-lower 10-minute interval in a query (see example below). I managed to do it by truncating the seconds and then subtracting the last digit of the minutes.

        WITH test_data AS (
          SELECT TO_DATE('2010-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2010-01-01 10:05:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2010-01-01 10:09:59', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2010-01-01 10:10:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2099-01-01 10:00:33', 'YYYY-MM-DD HH24:MI:SS') d FROM dual
        )
        -- #end of test-data
        SELECT d, TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
        FROM test_data

    And here is the result:

        01.01.2010 10:00:00    01.01.2010 10:00:00
        01.01.2010 10:05:00    01.01.2010 10:00:00
        01.01.2010 10:09:59    01.01.2010 10:00:00
        01.01.2010 10:10:00    01.01.2010 10:10:00
        01.01.2099 10:00:33    01.01.2099 10:00:00

    Works as expected, but is there a better way?

    EDIT: I was curious about performance, so I did the following test with 500,000 rows and (not really) random dates. I am going to add the results as comments to the provided solutions.

        DECLARE
          t TIMESTAMP := SYSTIMESTAMP;
        BEGIN
          FOR i IN (
            WITH test_data AS (
              SELECT SYSDATE + ROWNUM / 5000 d FROM dual CONNECT BY ROWNUM <= 500000
            )
            SELECT TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
            FROM test_data
          ) LOOP
            NULL;
          END LOOP;
          dbms_output.put_line( SYSTIMESTAMP - t );
        END;

    This approach took 03.24 s.

  • The correct usage of nested #pragma omp for directives

    - by GoldenLee
    The following code ran like a charm before OpenMP parallelization was applied. After adding the directive below, the code ends up in an endless loop! I'm sure that's a result of my incorrect use of the OpenMP directives. Would you please show me the correct way? Thank you very much.

        #pragma omp parallel for
        for (int nY = nYTop; nY <= nYBottom; nY++)
        {
            for (int nX = nXLeft; nX <= nXRight; nX++)
            {
                // Use look-up table for performance
                dLon = theApp.m_LonLatLUT.LonGrid()[nY][nX] + m_FavoriteSVISSRParams.m_dNadirLon;
                dLat = theApp.m_LonLatLUT.LatGrid()[nY][nX];

                // If you don't want to use the longitude/latitude look-up table,
                // uncomment the following line
                //NOMGeoLocate.XYToGEO(dLon, dLat, nX, nY);

                if (dLon > 180 || dLat > 180)
                {
                    continue;
                }
                if (Navigation.GeoToXY(dX, dY, dLon, dLat, 0) > 0)
                {
                    continue;
                }

                // Skip void data scanline
                dY = dY - nScanlineOffset;

                // Compute coefficients as well as the four neighboring points' values
                nX1 = int(dX);
                nX2 = nX1 + 1;
                nY1 = int(dY);
                nY2 = nY1 + 1;
                dCx = dX - nX1;
                dCy = dY - nY1;
                dP1 = pIRChannelData->operator[](nY1)[nX1];
                dP2 = pIRChannelData->operator[](nY1)[nX2];
                dP3 = pIRChannelData->operator[](nY2)[nX1];
                dP4 = pIRChannelData->operator[](nY2)[nX2];

                // Bilinear interpolation
                usNomDataBlock[nY][nX] =
                    (unsigned short)BilinearInterpolation(dCx, dCy, dP1, dP2, dP3, dP4);
            }
        }

  • I'm annoyed with ASP.NET MVC action links. Is there something better in MVC3?

    - by Jonathon Kresner
    After almost 3 years with MVC I'm scratching my head. Is it just me, or does the way we specify links in ASP.NET MVC suck?

        @Html.ActionLink("Log Off", "LogOff", "Account")

    In the previews for MVC 1 we had the funky generic action links, which gave us IntelliSense and compile checking, which I LOVED. I know they removed them because of performance issues and because you could not actually guarantee that the route would resolve all the time... However, the default way of doing it just doesn't make me feel safe enough in a big application.

    I've also used T4MVC with MVC 2; to be honest, I didn't really like it. It's not part of the MVC framework, and it's frustrating to develop with, especially with source control in big teams and continuous integration builds. I guess I could also import MVC Futures and keep using the generic types (it's probably what I'll do).

    I'm just about to start a very big project and was wondering what other people are thinking. Is anyone else annoyed with the options, or has a new solution? It seems like ActionLinks are the most basic & frequently used feature. Shouldn't there be a good out-of-the-box solution? We're just about to hit revision 3 of this framework.

  • Chicken and egg problem (restore database) when trying to write unit tests against SQL Server 2008

    - by Hamish Grubijan
    OK, they are not unit tests but end-to-end tests. The setup is somewhat involved. The tests will use C# and an ODBC connection. Every test will try to clean up after itself, but every 20 tests or so (once per C# class) we would need to do a full database restore. I do not think I can do it over an ODBC connection, according to this document: http://www.sql-server-performance.com/articles/dba/Obtain_Exclusive_Access_to_Restore_SQL_Server_p1.aspx

        Msg 6104, Level 16, State 1, Line 1
        Cannot use KILL to kill your own process.

    However, I would like to, so that 199 tests do not go amok because of a bad clean-up. Is there another way? Perhaps I can open a different "connection", such as COM automation or something of that sort, and then kill all database connections from there? If so, how can I do that?

    Also, will the clients be able to re-connect automatically after a restore, or would I have to dismantle everything once every 20 tests or so?

    If you find this question confusing, please let me know what your questions are. Thanks!
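
    What I have in mind, roughly, is a second SqlClient connection to master that forces everyone else off before the restore - an untested sketch, with the database name and backup path as placeholders:

        using System.Data.SqlClient;

        static void RestoreDatabase(string masterConnStr, string dbName, string backupPath)
        {
            // Connect to master rather than to the database being restored,
            // so this connection itself does not block the restore.
            using (var conn = new SqlConnection(masterConnStr))
            {
                conn.Open();
                using (var cmd = conn.CreateCommand())
                {
                    // Kick everyone else off and roll back their open transactions.
                    // (dbName and backupPath come from trusted test fixtures, hence the concatenation.)
                    cmd.CommandText = "ALTER DATABASE [" + dbName + "] SET SINGLE_USER WITH ROLLBACK IMMEDIATE";
                    cmd.ExecuteNonQuery();

                    cmd.CommandText = "RESTORE DATABASE [" + dbName + "] FROM DISK = '" + backupPath + "'";
                    cmd.CommandTimeout = 600; // restores can be slow
                    cmd.ExecuteNonQuery();

                    // Let the test clients back in.
                    cmd.CommandText = "ALTER DATABASE [" + dbName + "] SET MULTI_USER";
                    cmd.ExecuteNonQuery();
                }
            }
        }

    Presumably the ODBC test connections would still be killed by the ROLLBACK IMMEDIATE and would have to re-open, which is part of what I'm asking about.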

  • How to get a debug flow of execution in C++

    - by Rich
    Hi, I work on a global trading system which supports many users. Each user can book, amend, edit, and delete trades. The system is regulated by a central deal capture service, which informs all the users of any updates that occur. The problem comes when we have crashes: as the production environment is impossible to re-create on a test system, I have to rely on crash dumps and log files. However, these don't tell me what the user has been doing.

    I'd like a system that would (at the time of crashing) dump out a history of what the user has been doing. Anything that I add has to go into the live environment, so it can't impact performance too much. Ideas-wise, I was thinking of a MACRO at the top of each function which acted like a stack trace (only I could supply additional user information, like trade IDs, user dialog choices, etc.). The system would record stack traces (on a per-thread basis) and keep a history in a cyclic buffer (varying in size, depending on how much history you wanted to capture). Then on a crash, I could dump this history stack.

    I'd really like to hear if anyone has a better solution, or if anyone knows of an existing framework? Thanks, Rich
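
    To make the cyclic-buffer idea concrete, here is a rough sketch (written in C# for brevity - the real thing would be a C++ RAII guard object created by the macro; all names are made up):

        using System;

        static class CallHistory
        {
            private const int Capacity = 1024; // how much history to keep per thread

            // One ring buffer per thread, so recording needs no locking.
            [ThreadStatic] private static string[] ring;
            [ThreadStatic] private static int next;

            // Called at the top of each instrumented function.
            public static void Record(string entry)
            {
                if (ring == null) ring = new string[Capacity];
                ring[next] = DateTime.UtcNow.ToString("HH:mm:ss.fff") + " " + entry;
                next = (next + 1) % Capacity; // overwrite the oldest entry
            }

            // Called from the crash handler: this thread's history, oldest first.
            public static string[] Dump()
            {
                var result = new string[Capacity];
                for (int i = 0; i < Capacity; i++)
                    result[i] = ring == null ? null : ring[(next + i) % Capacity];
                return result;
            }
        }

    Each function would then start with something like CallHistory.Record("AmendTrade id=42"), and the crash handler would write Dump() out alongside the dump file.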

  • Database layout for an application with geocoding features using geokit

    - by vooD
    I'm developing a real estate web catalogue and want to geocode every ad using the geokit gem. My question is: what would be the best database layout, from the performance point of view, if I want to search by country, city of the selected country, administrative area, or nearest metro station of the selected city? Available countries, cities, administrative areas and metro stations should be defined by the administrator of the catalogue and must be validated by geocoding. I came up with a single table:

        create_table "geo_locations", :force => true do |t|
          t.integer "geo_location_id"          # parent geo location (e.g. country is the parent geo location of a city)
          t.string  "country", :null => false  # necessary for any geo location
          t.string  "city"                     # not null for city geo locations and their children
          t.string  "administrative_area"      # not null for administrative area geo locations and their children
          t.string  "thoroughfare_name"        # not null for metro station or street name geo locations and their children
          t.string  "premise_number"           # house number
          t.float   "lng", :null => false
          t.float   "lat", :null => false
          t.float   "bound_sw_lat", :null => false
          t.float   "bound_sw_lng", :null => false
          t.float   "bound_ne_lat", :null => false
          t.float   "bound_ne_lng", :null => false
          t.integer "mappable_id"
          t.string  "mappable_type"
          t.string  "type"                     # country, city, administrative area, metro station or address
        end

    The final geo location type is address: it contains all the necessary information to put a marker for the real estate ad on the map. But I'm still stuck on the search functionality. Any help would be highly appreciated.

  • What's so bad about building XML with string concatenation?

    - by wsanville
    In the thread "What’s your favorite “programmer ignorance” pet peeve?", the following answer appears, with a large number of upvotes:

        Programmers who build XML using string concatenation.

    My question is: why is building XML via string concatenation (such as with a StringBuilder in C#) bad? I've done this several times in the past, as it's sometimes the quickest way for me to get from point A to point B when it comes to the data structures/objects I'm working with. So far, I have come up with a few reasons why this isn't the greatest approach, but is there something I'm overlooking? Why should this be avoided?

    - Probably the biggest reason I can think of is that you need to escape your strings manually, and most programmers will forget this. It will work great for them when they test it, but then "randomly" their apps will fail when someone throws an & symbol in their input somewhere. OK, I'll buy this, but it's really easy to prevent the problem (SecurityElement.Escape, to name one way).

    - When I do this, I usually omit the XML declaration (i.e. <?xml version="1.0"?>). Is this harmful?

    - Performance penalties? If you stick with proper string concatenation (i.e. StringBuilder), is this anything to be concerned about? Presumably, a class like XmlWriter will also need to do a bit of string manipulation...

    - There are more elegant ways of generating XML, such as using XmlSerializer to automatically serialize/deserialize your classes. OK, sure, I agree. C# has a ton of useful classes for this, but sometimes I don't want to make a class for something really quick, like writing out a log file or something. Is this just me being lazy? If I am doing something "real", this is my preferred approach for dealing with XML.
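
    (On the performance point, the XmlWriter version of the quick log-file case would look something like the sketch below - untested, names made up - and it handles the escaping automatically:)

        using System.Text;
        using System.Xml;

        static string BuildLogEntry(string user, string message)
        {
            var sb = new StringBuilder();
            var settings = new XmlWriterSettings { OmitXmlDeclaration = true, Indent = true };
            using (var writer = XmlWriter.Create(sb, settings))
            {
                writer.WriteStartElement("entry");
                writer.WriteAttributeString("user", user);
                // Characters like & and < in message are escaped automatically here.
                writer.WriteElementString("message", message);
                writer.WriteEndElement();
            }
            return sb.ToString();
        }

    So the boilerplate is comparable to the StringBuilder version, which is partly why I'm asking whether concatenation is really so bad.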

  • Books for Computer Networking

    - by Altimet Gaandu
    Hi, I am a student of computer engineering from Vasula University, Somalia. We have a subject called Advanced Computer Networks, and the following is the list of recommended books.

    Text books:

    1. B. A. Forouzan, "TCP/IP Protocol Suite", Tata McGraw Hill edition, Third Edition.
    2. N. Olifer, V. Olifer, "Computer Networks: Principles, Technologies and Protocols for Network Design", Wiley India Edition, First Edition.

    References:

    1. W. Richard Stevens, "TCP/IP Volume 1, 2, 3", Addison Wesley.
    2. D. E. Comer, "TCP/IP Volume I and II", Pearson Education.
    3. W. R. Stevens, "Unix Network Programming", Vol. 1, Pearson Education.
    4. J. Walrand, P. Varaiya, "High Performance Communication Networks", Morgan Kaufmann.
    5. A. S. Tanenbaum, "Computer Networks", Pearson Education, Fourth Edition.

    But we have been unable to find these either in the market or on the internet (read: torrents). Please provide download links to any of these books and oblige. Thanks.

  • Is there an efficient way in LINQ to use a contains match if and only if there is no exact match?

    - by Peter
    I have an application where I am taking a large number of 'product names' input by a user and retrieving some information about each product. The problem is, the user may input a partial name or even a wrong name, so I want to return the closest matches for further selection. Essentially, if product name A exactly matches a record, return that; otherwise return any contains matches; otherwise return null.

    I have done this with three separate statements, and I was wondering if there is a more efficient way to do this. I am using LINQ to EF, but I materialize the products to a list first for performance reasons. productNames is a list of product names (input by the user); products is a list of product 'records'.

        var directMatches = (from s in productNames
                             join p in products on s.ToLower() equals p.name.ToLower() into result
                             from r in result.DefaultIfEmpty()
                             select new { Key = s, Product = r });

        var containsMatches = (from d in directMatches
                               from p in products
                               where d.Product == null && p.name.ToLower().Contains(d.Key)
                               select new { d.Key, Product = p });

        var matches = from d in directMatches
                      join c in containsMatches on d.Key equals c.Key into result
                      from r in result.DefaultIfEmpty()
                      select new { d.Key, Product = d.Product ?? (r != null ? r.Product : null) };
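
    For what it's worth, the single-pass direction I've been considering looks something like this - a rough, untested sketch (it assumes a Product class with a name property, that lower-cased names are unique, and it keeps every contains match like the original):

        using System.Collections.Generic;
        using System.Linq;

        static IEnumerable<KeyValuePair<string, Product>> Resolve(
            IEnumerable<string> productNames, List<Product> products)
        {
            // O(1) exact lookups; assumes lower-cased names are unique.
            var byName = products.ToDictionary(p => p.name.ToLower());

            foreach (var s in productNames)
            {
                string key = s.ToLower();
                Product exact;
                if (byName.TryGetValue(key, out exact))
                {
                    yield return new KeyValuePair<string, Product>(s, exact);
                    continue; // exact hit: skip the expensive scan entirely
                }

                // Fall back to a contains scan only for the misses.
                bool any = false;
                foreach (var p in products.Where(p => p.name.ToLower().Contains(key)))
                {
                    any = true;
                    yield return new KeyValuePair<string, Product>(s, p);
                }
                if (!any)
                    yield return new KeyValuePair<string, Product>(s, null); // no match at all
            }
        }

    Does that seem sounder than joining three query expressions together?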

  • java.awt.Robot.keyPress for continuous keystrokes

    - by Deb
    So, here's my problem. I have a Java program which will send keystroke messages to a game (built in Unity), based on how the user interacts with an Android phone. (My Java program is a listener for the Android interaction over Wi-Fi.) Now, in order to do this, I am using java.awt.Robot to send key presses to the game window. I have the following code block written in my listener program:

        if (interacting) {
            Robot robot = new Robot();
            robot.keyPress(VK_A);
            robot.delay(20); // to simulate the normal keyboard rate
        }

    Now, the variable interacting will be true as long as the user presses down on the touch screen of the phone, and what I intend to achieve is a continuous chain of keystroke messages being delivered to the game (through the listener). However, this is severely affecting performance, for some reason. I am noticing that the game becomes slow (rapidly dropping frame rates), and even the computer becomes slow in general. What's going wrong?

    Should I use a robot.keyRelease(VK_A) after each keyPress? But my game has a different action mapped to the release of a key, and I do not want rapid key presses and releases; what I really want is to simulate continuous keystrokes, in exactly the way it would behave if the user were pressing down the A key on their keyboard manually. Please help.

  • Get the first and last posts in a thread

    - by Grampa
    I am trying to code a forum website and I want to display a list of threads. Each thread should be accompanied by info about the first post (the "head" of the thread) as well as the last. My current database structure is the following:

        threads table:
          id   - int, PK, not NULL, auto-increment
          name - varchar(255)

        posts table:
          id        - int, PK, not NULL, auto-increment
          thread_id - FK for threads

    The tables have other fields as well, but they are not relevant for the query. I am interested in querying threads and somehow JOINing with posts so that I obtain both the first and last post for each thread in a single query (with no subqueries). So far I am able to do it using multiple queries, and I have defined the first post as being:

        SELECT *
        FROM threads t
        LEFT JOIN posts p ON t.id = p.thread_id
        ORDER BY p.id
        LIMIT 0, 1

    The last post is pretty much the same, except for ORDER BY p.id DESC. Now, I could select multiple threads with their first or last posts by doing:

        SELECT *
        FROM threads t
        LEFT JOIN posts p ON t.id = p.thread_id
        GROUP BY t.id
        ORDER BY p.id

    But of course I can't get both at once, since I would need to sort both ASC and DESC at the same time. What is the solution here? Is it even possible to use a single query? Is there any way I could change the structure of my tables to facilitate this? If this is not doable, then what tips could you give me to improve the query performance in this particular situation?

  • ASP.NET: Page HTML head rendering

    - by Fabian
    I've been trying to figure out the best way to custom-render the <head> element of a page, to get rid of the extra line breaks caused by <head runat="server">, so it's properly formatted. So far the only thing I've found that works is the following:

        protected override void Render(HtmlTextWriter writer)
        {
            StringWriter stringWriter = new StringWriter();
            HtmlTextWriter htmlTextWriter = new HtmlTextWriter(stringWriter);
            base.Render(htmlTextWriter);
            htmlTextWriter.Close();

            string html = stringWriter.ToString();
            string newHTML = html.Replace("\r\n\r\n<!DOCTYPE", "<!DOCTYPE")
                                 .Replace("\r\n<html", "<html")
                                 .Replace("<title>\r\n\t", "<title>")
                                 .Replace("\r\n</title>", "</title>")
                                 .Replace("</head>", "\n</head>");
            writer.Write(newHTML);
        }

    I define my head tag like <head runat="server">. Now I have 2 questions:

    1. How does the above code affect performance (i.e. is this viable in a production environment)?
    2. Is there a better way to do this, for example a method I can override to custom-render just the <head>?

    Oh yeah, ASP.NET MVC is not an option.
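
    One variation I've been toying with collapses the stray blank lines with a single compiled regex instead of chained Replace calls - a sketch, not benchmarked, and note it would also eat intentional blank lines (and leading indentation after them):

        using System.IO;
        using System.Text.RegularExpressions;
        using System.Web.UI;

        public class CleanHeadPage : Page
        {
            // Any run of two or more line breaks (plus stray indentation)
            // collapses to a single break.
            private static readonly Regex ExtraBreaks =
                new Regex(@"(\r?\n[ \t]*){2,}", RegexOptions.Compiled);

            protected override void Render(HtmlTextWriter writer)
            {
                using (var stringWriter = new StringWriter())
                {
                    base.Render(new HtmlTextWriter(stringWriter));
                    writer.Write(ExtraBreaks.Replace(stringWriter.ToString(), "\r\n"));
                }
            }
        }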

  • Obfuscator for .NET assembly (Maybe just a C++ obfuscator?)

    - by Pirate for Profit
    The software company I work for is using a ton of open-source LGPL/BSD/MIT C++ code that we have written wrappers around to port "helper classes" into a .NET assembly, via C++/CLI. These libraries have wrapped old cryptic APIs into easy-to-use ones based on common sense, and will be very helpful for a lot of different tasks. They will be included in many future clients' applications, and we might even license them to other software companies in the same field. So naturally we are tasked with looking into solutions for securing the code from prying eyes.

    What we're trying to do is stop the casual observer from seeing what's going on. Now, I have hacked some crazy shit in EverQuest and other video games in my day, so I know that with enough tireless effort anything can be done. But we don't want to make it easy for whomever.

    To the point: besides the Visual Studio compiler's optimizations, is there a C++ obfuscator or .NET assembly obfuscator (after it's been built, o.O) or something that would scramble everything up, re-arrange data structures, string constants, etc.? And if such a thing exists, we'd be curious to know how that would impact performance, as some sections of code are time-critical (funny saying that while using a managed M$ framework).

  • Is locking on the requested object a bad idea?

    - by Quick Joe Smith
    Most advice on thread safety involves some variation of the following pattern:

        public class Thing
        {
            private static readonly object padlock = new object();
            private string stuff, andNonsense;

            public string Stuff
            {
                get
                {
                    lock (Thing.padlock)
                    {
                        if (this.stuff == null)
                            this.stuff = "Threadsafe!";
                    }
                    return this.stuff;
                }
            }

            public string AndNonsense
            {
                get
                {
                    lock (Thing.padlock)
                    {
                        if (this.andNonsense == null)
                            this.andNonsense = "Also threadsafe!";
                    }
                    return this.andNonsense;
                }
            }

            // Rest of class...
        }

    In cases where the get operations are expensive and unrelated, a single locking object is unsuitable because a call to Stuff would block all calls to AndNonsense, degrading performance. And rather than create a lock object for each call, wouldn't it be better to acquire the lock on the member itself (assuming it is not something that implements SyncRoot or somesuch for that purpose)? For example:

        public string Stuff
        {
            get
            {
                lock (this.stuff)
                {
                    // Pretend that this is a very expensive operation.
                    if (this.stuff == null)
                        this.stuff = "Still threadsafe and good?";
                }
                return this.stuff;
            }
        }

    Strangely, I have never seen this approach recommended or warned against. Am I missing something obvious?
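
    For reference, the "lock object for each call" variation I'm comparing against would look something like this (sketch):

        public class Thing
        {
            // One dedicated lock per independent lazy field, so an expensive
            // Stuff initialization never blocks readers of AndNonsense.
            private readonly object stuffLock = new object();
            private readonly object andNonsenseLock = new object();
            private string stuff, andNonsense;

            public string Stuff
            {
                get
                {
                    lock (this.stuffLock)
                    {
                        if (this.stuff == null)
                            this.stuff = "Threadsafe!";
                        return this.stuff;
                    }
                }
            }

            public string AndNonsense
            {
                get
                {
                    lock (this.andNonsenseLock)
                    {
                        if (this.andNonsense == null)
                            this.andNonsense = "Also threadsafe!";
                        return this.andNonsense;
                    }
                }
            }
        }

    My question is whether locking the member itself safely buys the same granularity without the extra fields.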

  • Android OpenGL: releasing textures

    - by user1642418
    I have a bit of a problem. I am developing a game for Android, plus an engine, and I've got stuck. I am getting an OpenGL out-of-memory error, and either the app crashes or the phone hangs after loading a scene multiple times. For example: the app launches, shows the main menu, and the first level/scene is loaded. Then I go back to the main menu and repeat. It doesn't matter which scene I load; after 4-6 times the error occurs.

    Some background: each time a scene is loaded, all the resources are released, and upon the first frame render the needed stuff gets loaded. The performance is more or less OK. Note that I am calling the glDeleteTextures method, but I think it's not doing its job and releasing memory. The thing is that when I minimize the app and open it again, the problem doesn't occur, even though almost the same things are executed; this way Android releases memory. How do I release/get rid of unused textures properly?

    This happens on an HTC Desire HD (Ice Cream Sandwich 4.0.4). Other games work fine, so I bet this is not a problem in the ROM.

  • Hibernate: fetching multiple bags efficiently

    - by Jens Jansson
    Hi! I'm developing a multilingual application. For this reason, many objects have, in their name and description fields, collections of something I call LocalizedStrings instead of plain strings. Every LocalizedString is basically a pair of a locale and a string localized to that locale. Let's take as an example an entity, say a book object:

        public class Book {
            @OneToMany
            private List<LocalizedString> names;

            @OneToMany
            private List<LocalizedString> description;

            // and so on...
        }

    When a user asks for a list of books, the application does a query to get all the books, fetches the name and description of every book in the locale the user has selected to run the app in, and displays it back to the user. This works, but it is a major performance issue. At the moment, Hibernate makes one query to fetch all the books, and after that it goes through every single object and asks Hibernate for the localized strings for that specific object, resulting in an "n+1 select problem". Fetching a list of 50 entities produces about 6000 rows of SQL commands in my server log.

    I tried making the collections eager, but that led me to the "cannot simultaneously fetch multiple bags" issue. Then I tried setting the fetch strategy on the collections to subselect, hoping that it would do one query for all books, and after that one query that fetches all LocalizedStrings for all the books. Subselect didn't work in this case how I would have hoped, and it basically just did exactly the same as my first case.

    I'm starting to run out of ideas on how to optimize this. So, in short: what fetching-strategy alternatives are there when you are fetching a collection whose every element has one or more collections in itself, which have to be fetched simultaneously?

  • Grouping categorized data in WPF

    - by VoidDweller
    Here is what I am trying to do.

    Dynamic Category columns:
    - Can be 0 or more.
    - Must contain 1 or more Type columns.
    - Will only be displayed if any row contains Type column data associated with them.

    Data rows:
    - Will be added asynchronously.
    - Will be grouped by a Common Category column.
    - Will add a Dynamic Category if it does not yet exist.
    - Will add a Type column if it does not yet exist within its appropriate Dynamic Category.

    Platform info: WPF, .NET 3.5 SP1, C#, MVVM.

    I have a few partially functional prototypes, but each has its own major set of problems. Can any of you give me some guidance on this? Envision this nicely styled. :-)

        --------------------------------------------------------------------------
        |[ Common Category ]|[ Dynamic Category 0 ]|[ Dynamic Category N ]|
        --------------------------------------------------------------------------
        |[Header 1]|[Header 2]|[ Type 0 ]|[ Type N ]|[ Type 0 ]|[ Type N ]|
        --------------------------------------------------------------------------
        |[Data 2 Group]                                                          |
        --------------------------------------------------------------------------
        | Data A | Data 2 || Null   | Data 1 || Data 0 | Data 1 ||
        | Data B | Data 2 || Data 0 | Null   || Data 0 | Data 1 ||
        --------------------------------------------------------------------------
        |[Data 1 Group]                                                          |
        --------------------------------------------------------------------------
        | Data C | Data 1 || Null   | Data 1 || Data 0 | Data 1 ||
        | Data D | Data 1 || Null   | Null   || Data 0 | Null   ||
        --------------------------------------------------------------------------

    Edit: Sorting and paging are not necessary. I have looked at nested ListViews and DataGrids, and at dynamically building a Grid. Dynamically building a Grid and leveraging the SharedSizeGroup property seems the most promising strategy, but I am concerned about performance. Would a better approach be to consider this a dynamic report? If so, what should I be looking at? Thanks for your help.
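
    For what it's worth, the dynamic-Grid approach would add a Type column at runtime something like this (a sketch; the group-name scheme is made up), with Grid.SetIsSharedSizeScope(rootPanel, true) applied once on the containing panel:

        using System.Windows;
        using System.Windows.Controls;

        static void AddTypeColumn(Grid categoryGrid, string sharedGroup)
        {
            // Columns in different group rows stay aligned because they share
            // the same SharedSizeGroup name within one shared-size scope.
            categoryGrid.ColumnDefinitions.Add(new ColumnDefinition
            {
                Width = GridLength.Auto,
                SharedSizeGroup = sharedGroup // e.g. "DynCat0_Type0"
            });
        }

    It's exactly this per-column bookkeeping, multiplied across asynchronous row additions, that makes me worry about performance.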

  • Best web database solution for Scala for a high-traffic site?

    - by egervari
    I am in charge of rebuilding a website that gets about 250,000 visitors a day. We'd like to use Scala, but it does not work very well with Spring (in some minor cases) and Hibernate (there is a major and very annoying mismatch here if you want to use Scala collections, which we do). The application itself is going to have about 40-50 tables.

    Other than Hibernate, is there an ORM that works well with Scala and is as performant and reliable as Hibernate? Does it also have the same capabilities, or are we going to run into leaky abstractions if we don't use Hibernate? It would be a big risk for us to go with a framework that is newer and doesn't seem to have a lot of industry backing... and at the same time, Hibernate is a real pain to program against when using Scala:

    1. The Java collection <-> Scala collection conversion is absolutely painful. There is a lot more boilerplate and crap to write.
    2. The IDE doesn't import JavaConversions and Java interfaces automatically, so this needs to be done manually. Optimizing imports in IDEA is going to destroy all the manual work.
    3. There is also a performance cost to converting back and forth all the time in your domain objects and your DAO classes.
    4. Not to mention there needs to be a lot of casting, which produces code ugly as sin.

    I actually would love to write my own ORM that is 100% tailored to Scala, but obviously this is really outside the scope of our project for now. So what is the best approach?

  • Approach for caching data from a data logger

    - by filip-fku
    Greetings, I've been working on a C#/.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs needs to be acquired - data for at least a few days. The hardware interface is a UART-to-USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks.

    What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation the user will have to wait for the full download cycle, at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again. The approach I have been considering is to use a light database, like SQL Server CE, that can store the data logs as they are received. I would then first search the cache prior to querying the device for logs. The cache would be updated with any logs obtained by the request that were not already cached.

    Finally, my question: would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!
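
    Concretely, the cache-first check I have in mind would look something like this - an untested sketch, with the table name and schema made up (one log per minute, keyed by timestamp):

        using System;
        using System.Data.SqlServerCe;

        static DateTime? LatestCachedTimestamp(string cacheConnStr)
        {
            using (var conn = new SqlCeConnection(cacheConnStr))
            {
                conn.Open();
                using (var cmd = new SqlCeCommand("SELECT MAX(LogTime) FROM Logs", conn))
                {
                    object result = cmd.ExecuteScalar();
                    // MAX over an empty table comes back as DBNull.
                    return result == DBNull.Value ? (DateTime?)null : (DateTime)result;
                }
            }
        }

    Anything newer than LatestCachedTimestamp() would then be fetched from the device at the ~30 logs/second hardware rate and inserted into the cache as it arrives, while already-cached ranges are served straight from the local database.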

  • C++ BigInt multiplication conceptual problem

    - by Kapo
    I'm building a small BigInt library in C++ for use in my programming language. The structure is like the following:

        short digits[1000];
        int len;

    I have a function that converts a string into a bigint by splitting it up into single chars and putting them into digits. The numbers in digits are all reversed, so the number 123 would look like the following:

        digits[0] = 3
        digits[1] = 2
        digits[2] = 1

    I have already managed to code the adding function, which works perfectly. It works somewhat like this:

        overflow = 0
        for i++ until length of both numbers exceeded:
            add numberA[i] to numberB[i]
            add overflow to the result
            set overflow to 0
            if the result is bigger than 10:
                subtract 10 from the result
                overflow = 1
            put the result into numberReturn[i]

    (Overflow is in this case what happens when I add 1 to 9: subtract 10 from 10, add 1 to overflow, and the overflow gets added to the next digit.)

    So think of how two numbers are stored, like these:

        index | 0 | 1 | 2
        -----------------
        A     | 2 | - | -
        B     | 0 | 0 | 1

    The above represents the digits of the bigints 2 (A) and 100 (B). "-" means uninitialized digits; they aren't accessed. So adding the above numbers works fine: start at 0, add 2 + 0, go to 1, add 0, go to 2, add 1.

    But: when I want to do multiplication with the above structure, my program ends up doing the following: start at 0, multiply 2 with 0 (eek), go to 1, ...

    So it is obvious that, for multiplication, I have to get an order like this:

        index | 0 | 1 | 2
        -----------------
        A     | - | - | 2
        B     | 0 | 0 | 1

    Then everything would be clear: start at 0, multiply 0 with 0, go to 1, multiply 0 with 0, go to 2, multiply 1 with 2.

    How can I manage to get the digits into the correct form for multiplication? I don't want to do any array moving/flipping - I need performance!
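
    Edit: sketching the standard schoolbook approach (in C# here just to test my understanding; translating to the short digits[1000] representation should be mechanical), it seems no reordering is needed at all, because with least-significant-digit-first storage a[i] * b[j] simply contributes to result[i + j]:

        // Schoolbook multiplication over least-significant-first digit arrays.
        // Digit i has weight 10^i, so a[i] * b[j] accumulates into result[i + j].
        static int[] Multiply(int[] a, int lenA, int[] b, int lenB)
        {
            var result = new int[lenA + lenB]; // product has at most lenA + lenB digits
            for (int i = 0; i < lenA; i++)
            {
                int carry = 0;
                for (int j = 0; j < lenB; j++)
                {
                    int t = result[i + j] + a[i] * b[j] + carry;
                    result[i + j] = t % 10;
                    carry = t / 10;
                }
                result[i + lenB] += carry; // leftover carry lands one place past b
            }
            return result;
        }

    Is that the right track, or is there a faster layout?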

  • Android CursorAdapters, ListViews and background threads

    - by MattC
    This application I've been working on has databases with multiple megabytes of data to sift through. A lot of the activities are just ListViews descending through various levels of data within the databases until we reach "documents", which are just HTML to be pulled from the DB(s) and displayed on the phone.

    The issue I am having is that some of these activities need the ability to search through the databases by capturing keystrokes and re-running the query with a "LIKE %blah%" clause in it. This works reasonably quickly, except when the user is first loading the data and when the user first enters a keystroke. I am using a ResourceCursorAdapter and I am generating the cursor in a background thread, but in order to do a listAdapter.changeCursor(), I have to use a Handler to post it to the main UI thread. This particular call is then freezing the UI thread just long enough to bring up the dreaded ANR dialog.

    I'm curious how I can offload this to a background thread completely, so the user interface remains responsive and we don't have ANR dialogs popping up. Just for full disclosure, I was originally returning an ArrayList of custom model objects and using an ArrayAdapter, but (understandably) the customer pointed out that it was bad memory management, and I wasn't happy with the performance anyway. I'd really like to avoid a solution where I'm generating huge lists of objects and then doing a listAdapter.notifyDataSetChanged()/Invalidated().

    Here is the code in question:

        private Runnable filterDrugListRunnable = new Runnable() {
            public void run() {
                if (filterLock.tryLock() == false)
                    return;

                cur = ActivityUtils.getIndexItemCursor(DrugListActivity.this);
                if (cur == null || forceRefresh == true) {
                    cur = docDb.getItemCursor(selectedIndex.getIndexId(), filter);
                    ActivityUtils.setIndexItemCursor(DrugListActivity.this, cur);
                    forceRefresh = false;
                }

                updateHandler.post(new Runnable() {
                    public void run() {
                        listAdapter.changeCursor(cur);
                    }
                });

                filterLock.unlock();
                updateHandler.post(hideProgressRunnable);
                updateHandler.post(updateListRunnable);
            }
        };

  • Problem with incomplete input when using Attoparsec

    - by Dan Dyer
    I am converting some functioning Haskell code that uses Parsec to instead use Attoparsec, in the hope of getting better performance. I have made the changes and everything compiles, but my parser does not work correctly.

    I am parsing a file that consists of various record types, one per line. Each of my individual functions for parsing a record or comment works correctly, but when I try to write a function to compile a sequence of records, the parser always returns a partial result because it is expecting more input. These are the two main variations that I've tried; both have the same problem.

        items :: Parser [Item]
        items = sepBy (comment <|> recordType1 <|> recordType2) endOfLine

    For the second one I changed the record/comment parsers to consume the end-of-line characters.

        items :: Parser [Item]
        items = manyTill (comment <|> recordType1 <|> recordType2) endOfInput

    Is there anything wrong with my approach? Is there some other way to achieve what I am attempting?
