Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.


  • Finding What You Need in R: function arguments/parameters from outside the function's package

    - by doug
    Often in R there are a dozen functions scattered across as many packages, all of which have the same purpose but of course differ in accuracy, performance, theoretical rigor, and so on. How do you gather all of these in one place before you start your task?

    For instance, take the generic plot function. Setting secondary ticks is much easier (IMHO) using a function outside of the base package, minor.tick(nx=n, ny=n, tick.ratio=n), found in Hmisc. Of course, that doesn't show up in plot's documentation. Likewise, the data-input arguments to plot can be supplied by an object returned from the function hexbin, again from a library outside of the base installation (where plot resides). What would be great, obviously, is a programmatic way to gather these function arguments from the various libraries and put them in a single namespace.

    Edit (trying to re-state my example above more clearly): the arguments to plot supplied in the base package for, e.g., setting the axis tick frequency are xaxp/yaxp; however, one can also set the axis tick frequency via a function outside of the base package, as in the minor.tick function from the Hmisc package--but you wouldn't know that just from looking at the plot method signature. Is there a meta function in R for this?

    So far, as I come across them, I've been manually gathering them in a TextMate 'snippet' (along with the attendant library imports). This isn't that difficult or time consuming, but I can only update my snippet as I find out about these additional arguments/parameters. Is there a canonical R way to do this, or at least an easier way?

    Just in case that wasn't clear, I am not talking about the case where multiple packages provide functions directed to the same statistic or view (e.g., boxplot in the base package; boxplot.matrix in gplots; and bplots in Rlab). What I am talking about is the case in which the function name is the same across two or more packages.
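
    As a concrete instance of the mismatch described, here is a minimal sketch using only the calls named in the question: base plot controls ticks through parameters like xaxp, while the extra minor ticks come from a function plot's signature never mentions:

        # a sketch: secondary ticks via Hmisc, invisible from plot()'s own signature
        library(Hmisc)
        plot(1:100, rnorm(100))
        minor.tick(nx = 2, ny = 2, tick.ratio = 0.5)   # arguments as given in the question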

    Read the article

  • Using array of Action() in a lambda expression

    - by Sean87
    I want to do some performance measurement for a method that does some work with int arrays, so I wrote the following class:

        public class TimeKeeper
        {
            public TimeSpan Measure(Action[] actions)
            {
                var watch = new Stopwatch();
                watch.Start();
                foreach (var action in actions)
                {
                    action();
                }
                return watch.Elapsed;
            }
        }

    But I cannot call the Measure method as in the example below:

        var elapsed = new TimeKeeper();
        elapsed.Measure(
            () => new Action[]
            {
                FillArray(ref a, "a", 10000),
                FillArray(ref a, "a", 10000),
                FillArray(ref a, "a", 10000)
            });

    I get the following errors:

        Cannot convert lambda expression to type 'System.Action[]' because it is not a delegate type
        Cannot implicitly convert type 'void' to 'System.Action'
        Cannot implicitly convert type 'void' to 'System.Action'
        Cannot implicitly convert type 'void' to 'System.Action'

    Here is the method that works with the arrays:

        private void FillArray(ref int[] array, string name, int count)
        {
            array = new int[count];
            for (int i = 0; i < array.Length; i++)
            {
                array[i] = i;
            }
            Console.WriteLine("Array {0} is now filled up with {1} values", name, count);
        }

    What am I doing wrong?
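
    A hedged sketch of the likely fix, reusing the question's names: Measure expects an array of delegates, but the calls above invoke FillArray immediately and try to store its void results. Wrapping each call in its own lambda defers execution until Measure runs it:

        // a sketch, not the asker's final code: each element must be an Action
        // that *wraps* the call, so FillArray runs inside Measure, not before it
        int[] a = null;
        var keeper = new TimeKeeper();
        var elapsed = keeper.Measure(new Action[]
        {
            () => FillArray(ref a, "a", 10000),
            () => FillArray(ref a, "a", 10000),
            () => FillArray(ref a, "a", 10000)
        });

    (Capturing the local a and passing it by ref from inside a lambda is legal; only ref parameters of the enclosing method cannot be captured.)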

    Read the article

  • java: libraries for immutable functional-style data structures

    - by Jason S
    This is very similar to another question (Functional Data Structures in Java), but the answers there are not particularly useful. I need to use immutable versions of the standard Java collections (e.g. HashMap / TreeMap / ArrayList / LinkedList / HashSet / TreeSet). By "immutable" I mean immutable in the functional sense (as in purely functional data structures), where updating operations on the data structure do not change the original data but instead return a new instance of the same kind of data structure. Typically, new and old instances of the data structure will also share immutable data, to be efficient in time and space.

    From what I can tell, my options include Functional Java, Scala, and Clojure, but I'm not sure whether any of these is particularly appealing to me. I have a few requirements/desirements:

    The collections in question should be usable directly in Java (with the appropriate libraries in the classpath). FJ would work for me; I'm not sure if I can use Scala's or Clojure's data structures in Java without having to use the compilers/interpreters from those languages and without having to write Scala or Clojure code.

    Core operations on lists/maps/sets should be possible without having to create function objects with confusing syntaxes (FJ looks slightly iffy).

    They should be efficient in time and space. I'm looking for a library which ideally has done some performance testing. FJ's TreeMap is based on a red-black tree; not sure how that rates.

    Documentation / tutorials should be good enough that someone can get started quickly using the data structures. FJ fails on that front.

    Any suggestions?
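
    To make "immutable in the functional sense" concrete, here is a minimal persistent-list sketch in plain Java (not taken from any of the libraries mentioned): an "update" returns a new list whose tail is the old one, so both versions share structure:

        // a minimal sketch of a persistent (functional) list with structural sharing
        final class PList<T> {
            final T head;
            final PList<T> tail;      // shared, never copied

            PList(T head, PList<T> tail) {
                this.head = head;
                this.tail = tail;
            }

            // "update" returns a new list; the old one is untouched and shared as the tail
            static <T> PList<T> cons(T value, PList<T> list) {
                return new PList<T>(value, list);
            }
        }

    Prepending is then PList.cons(x, list); the old list remains valid afterwards, which is exactly the sharing behavior FJ, Scala, and Clojure generalize to maps and sets.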

    Read the article

  • Is this 2D array initialization a bad idea?

    - by Brendan Long
    I have something I need a 2D array for, but for better cache performance, I'd rather have it actually be a normal (contiguous) array. Here's the idea I had, but I don't know if it's a terrible idea:

        const int XWIDTH = 10, YWIDTH = 10;

        int main() {
            int * tempInts = new int[XWIDTH * YWIDTH];
            int ** ints = new int*[XWIDTH];
            for (int i = 0; i < XWIDTH; i++) {
                ints[i] = &tempInts[i * YWIDTH];
            }
            // do things with ints
            delete[] ints[0];
            delete[] ints;
            return 0;
        }

    So the idea is that instead of newing a bunch of arrays (and having them placed in different places in memory), I just point into an array I made all at once. The reason for the delete[] ints[0]; is that I'm actually doing this in a class, and it would save [trivial amounts of] memory not to keep the original pointer around.

    Just wondering if there are any reasons this is a horrible idea, or if there's an easier/better way. The goal is to be able to access the array as ints[x][y] rather than ints[x*YWIDTH+y].
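
    A hedged alternative sketch: keep the single contiguous allocation and get the ints[x][y] syntax from a tiny wrapper instead of a second pointer array (names are illustrative):

        // a sketch: one contiguous buffer, 2D syntax via operator[] returning a row pointer
        class Grid2D {
        public:
            Grid2D(int xw, int yw) : yw_(yw), data_(new int[xw * yw]) {}
            ~Grid2D() { delete[] data_; }

            int* operator[](int x) { return data_ + x * yw_; }  // grid[x][y] now works

        private:
            int  yw_;
            int* data_;
            Grid2D(const Grid2D&);            // non-copyable, for brevity
            Grid2D& operator=(const Grid2D&);
        };

    Access then reads grid[x][y], with both indices compiling down to the single x*YWIDTH+y offset into one contiguous block.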

    Read the article

  • Could I do this blind relative to absolute path conversion (for perforce depot paths) better?

    - by wonderfulthunk
    I need to "blindly" (i.e. without access to the filesystem, in this case the source control server) convert some relative paths to absolute paths. So I'm playing with dotdots and indices. For those that are curious, I have a log file produced by someone else's tool that sometimes outputs relative paths, and for performance reasons I don't want to access the source control server where the paths are located to check whether they're valid and more easily convert them to their absolute path equivalents.

    I've gone through a number of (probably foolish) iterations trying to get it to work - mostly a few variations of iterating over the array of folders and trying delete_at(index) and delete_at(index-1) - but my index kept incrementing while I was deleting elements of the array out from under myself, which didn't work for cases with multiple dotdots. Any tips on improving it in general, or specifically on the lack of non-consecutive dotdot support, would be welcome.

    Currently this is working with my limited examples, but I think it could be improved. It can't handle non-consecutive '..' directories, and I am probably doing a lot of wasteful (and error-prone) things that I don't need to do, because I'm a bit of a hack. I've found a lot of examples of converting other types of relative paths using other languages, but none of them seemed to fit my situation.

    These are my example paths that I need to convert, from:

        //depot/foo/../bar/single.c
        //depot/foo/docs/../../other/double.c
        //depot/foo/usr/bin/../../../else/more/triple.c

    to:

        //depot/bar/single.c
        //depot/other/double.c
        //depot/else/more/triple.c

    And my script:

        begin
          paths = File.open(ARGV[0]).readlines
          puts(paths)
          new_paths = Array.new
          paths.each { |path|
            folders = path.split('/')
            if ( folders.include?('..') )
              num_dotdots = 0
              first_dotdot = folders.index('..')
              last_dotdot = folders.rindex('..')
              folders.each { |item|
                if ( item == '..' )
                  num_dotdots += 1
                end
              }
              if ( first_dotdot and ( num_dotdots > 0 ) ) # this might be redundant?
                # dependent on consecutive dotdots only
                folders.slice!(first_dotdot - num_dotdots..last_dotdot)
              end
            end
            folders.map! { |elem|
              if ( elem !~ /\n/ )
                elem = elem + '/'
              else
                elem = elem
              end
            }
            new_paths << folders.to_s
          }
          puts(new_paths)
        end
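
    A hedged sketch of the usual stack-based approach, which also handles non-consecutive '..' segments (assuming the paths always stay under the //depot root):

        # a sketch: walk the segments, popping the previous folder whenever '..' appears
        def collapse_dotdots(path)
          out = []
          path.chomp.split('/').each do |seg|
            if seg == '..' && !out.empty?
              out.pop                # '..' cancels the folder before it
            else
              out << seg
            end
          end
          out.join('/')
        end

        # collapse_dotdots('//depot/foo/docs/../../other/double.c')
        #   => "//depot/other/double.c"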

    Read the article

  • Developing a 2D Game for Windows Phone 8

    - by Vaccano
    I would like to develop a 2D game for Windows Phone 8. I am a professional application developer by day, and this seems like a fun hobby. But I have been disappointed trying to get going. It seems that 2D games (far and away the majority of games) do not have any real support: the Windows Phone makers did not include support for Direct2D, so unless you are planning to make a fully 3D app, you are out of luck.

    So, if you just want to make a nice 2D app, these are your choices:

    1. Write your game using XAML and C# (performance issues?)
    2. Write your game using Direct3D, but only draw on one plane.
    3. Use the DirectX Tool Kit found on CodePlex. It allows you to use the dying XNA framework's API for development.

    Number 3 seems the best for my game, but I hate to waste my time learning the XNA API when Microsoft has clearly stated that it is not going to be supported going forward. Number 2 would work, but 3D development is really hard; I would rather not have to do all that to get the 2D effect. (Assuming Direct2D is easier - I have yet to look into that.) Number 1 seems the easiest, but I worry that my app will not run well if it is based on XAML rendering rather than DirectX.

    What is the suggested method from Microsoft? And who decided that 2D games were going to get shortchanged?

    Read the article

  • Fastest way to clamp a real (fixed/floating point) value?

    - by Niklas
    Hi,

    Is there a more efficient way to clamp real numbers than using if statements or ternary operators? I want to do this both for doubles and for a 32-bit fixed-point implementation (16.16). I'm not asking for code that can handle both cases; they will be handled in separate functions.

    Obviously, I can do something like:

        double a = calculate();
        double clampedA = a > MY_MAX ? MY_MAX : a;
        clampedA = clampedA < MY_MIN ? MY_MIN : clampedA;

    or

        double a = calculate();
        double clampedA = a;
        if (clampedA > MY_MAX)
            clampedA = MY_MAX;
        else if (clampedA < MY_MIN)
            clampedA = MY_MIN;

    The fixed-point version would use functions/macros for comparisons. This is done in a performance-critical part of the code, so I'm looking for as efficient a way to do it as possible (which I suspect would involve bit manipulation).

    EDIT: It has to be standard/portable C; platform-specific functionality is not of any interest here. Also, MY_MIN and MY_MAX are the same type as the value I want clamped (doubles in the examples above).
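
    A hedged sketch of two common answers, staying within standard C as the question requires: C99 fmin/fmax for doubles, and a classic branchless min/max for the 16.16 fixed-point case. The integer trick assumes two's complement and that (a - b) cannot overflow, so treat it as a sketch rather than a drop-in:

        #include <math.h>
        #include <stdint.h>

        /* doubles: C99 fmin/fmax compile to branchless instructions on many targets */
        static double clamp_d(double a, double lo, double hi)
        {
            return fmin(fmax(a, lo), hi);
        }

        /* 16.16 fixed point: branchless min/max (two's complement assumed,
           and (a - b) must not overflow) */
        static int32_t min_fx(int32_t a, int32_t b)
        {
            return b + ((a - b) & -(a < b));   /* mask is all-ones when a < b */
        }

        static int32_t max_fx(int32_t a, int32_t b)
        {
            return a - ((a - b) & -(a < b));
        }

        static int32_t clamp_fx(int32_t a, int32_t lo, int32_t hi)
        {
            return min_fx(max_fx(a, lo), hi);
        }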

    Read the article

  • A question about the basics of how LINQ to SQL works

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the ease of use and good performance. I used to think that when doing LINQ queries like

        from Customer in DB.Customers where Customer.Age > 30 select Customer

    LINQ would get all customers from the database ("SELECT * FROM Customers"), move them to the Customers array, and then search that array using .NET methods. This would be very inefficient: what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application.

    Now, after experiencing how fast LINQ to SQL actually is, I have started to suspect that when doing the query I just wrote, LINQ somehow converts it to a SQL query string

        SELECT * FROM Customers WHERE Age > 30

    and only runs the query when necessary. So my question is: am I right? And when is the query actually run?

    The reason why I'm asking is not only that I want to understand how it works in order to build well-optimized applications, but also that I came across the following problem. I have two tables; one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in the past 10 days. It's done with this simple query:

        from Book in DB.Books
        where (from Sale in DB.Sales
               where Sale.SalesAmount >= 50
                 and Sale.DateOfSale >= DateTime.Now.AddDays(-10)
               select Sale.BookID).Contains(Book.ID)
        select Book

    The point is, I have to use the checking part in several queries, so I decided to create an array with the IDs of all popular books:

        var popularBooksIDs = from Sale in DB.Sales
                              where Sale.SalesAmount >= 50
                                and Sale.DateOfSale >= DateTime.Now.AddDays(-10)
                              select Sale.BookID;

    BUT when I try to do the query now:

        from Book in DB.Books where popularBooksIDs.Contains(Book.ID) select Book

    it doesn't work! That's why I think we can't use these kinds of shortcuts in LINQ to SQL queries, like we can't use them in real SQL. We have to create straightforward queries, am I right?
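
    A hedged sketch of the deferred-execution behavior the question is probing (variable names are illustrative): the SQL is generated and sent only when the query is enumerated:

        // a sketch: LINQ to SQL builds an expression tree here; no SQL is sent yet
        var over30 = from c in db.Customers
                     where c.Age > 30
                     select c;

        // the query executes when it is enumerated (foreach, ToList, Count, ...)
        var list = over30.ToList();   // SELECT ... WHERE [Age] > 30 runs here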

    Read the article

  • Multithreaded linked list traversal

    - by Rob Bryce
    Given a (doubly) linked list of objects (C++), I have an operation that I would like to multithread, to perform on each object. The cost of the operation is not uniform for each object. The linked list is the preferred storage for this set of objects for a variety of reasons. The 1st element in each object is the pointer to the next object; the 2nd element is the previous object in the list.

    I have solved the problem by building an array of nodes and applying OpenMP. This gave decent performance. I then switched to my own threading routines (based on Windows primitives), and by using InterlockedIncrement() (acting on the index into the array) I can achieve higher overall CPU utilization and faster throughput. Essentially, the threads work by "leap-frogging" along the elements.

    My next approach to optimization is to try to eliminate creating/reusing the array of elements from my linked list. However, I'd like to continue with this "leap-frog" approach and somehow use some nonexistent routine that could be called "InterlockedCompareDereference" - to atomically compare against NULL (end of list) and conditionally dereference & store, returning the dereferenced value. I don't think InterlockedCompareExchangePointer() will work, since I cannot atomically dereference the pointer and call this Interlocked() method.

    I've done some reading, and others are suggesting critical sections or spin-locks. Critical sections seem heavyweight here. I'm tempted to try spin-locks, but I thought I'd first pose the question here and ask what other people are doing. I'm not convinced that the InterlockedCompareExchangePointer() method itself could be used like a spin-lock. Then one also has to consider acquire/release/fence semantics...

    Ideas? Thanks!
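
    A hedged sketch of one way to get the "leap-frog" behavior without the array, assuming the list itself is not modified during the pass: keep a shared atomic cursor and let each thread claim a node by CAS-advancing it (shown with C++11 atomics rather than the Windows Interlocked API):

        #include <atomic>

        struct Node {
            Node* next;
            Node* prev;
            // ... payload ...
        };

        // shared cursor over a list that is immutable during traversal
        std::atomic<Node*> cursor;

        // each worker calls claim() in a loop until it returns nullptr
        Node* claim()
        {
            Node* cur = cursor.load(std::memory_order_acquire);
            while (cur != nullptr &&
                   !cursor.compare_exchange_weak(cur, cur->next,
                                                 std::memory_order_acq_rel))
            {
                // cur was refreshed by the failed CAS; retry with the new value
            }
            return cur;   // this thread now owns cur (or the list is exhausted)
        }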

    Read the article

  • How can I tell if a byte array has already been compressed?

    - by MikeG
    Hi,

    Can I rely on the first few bytes of data compressed using the System.IO.Compression.DeflateStream in .NET always being the same? These seem to always be the first bytes:

        237, 189, 7, 96, 28, 73, 150, 37, 38, 47, ...

    I'm assuming this is some kind of header. I'd like to assume that this header is fixed and isn't going to change. Has anyone got any extra info about this?

    Background info (the reason I want to know this): I have a load of data in a database table that could do with being made smaller. I've decided I'm going to start compressing the data, and I'm not going to bother compressing the existing data. When the data gets into my .NET code, it is a string. I'd like to be able to look at the first few bytes of the string and see if it has been compressed; if it has, then I need to decompress it.

    I was originally thinking I could convert the string to bytes and just try decompressing the data, and if an exception happened, assume it wasn't compressed. But I think checking the header bytes would give me much better performance.

    Many thanks,
    Mike G
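
    A hedged sketch of a more robust alternative: rather than relying on whatever DeflateStream happens to emit first, prepend your own marker when storing compressed rows, so detection never depends on the compressor's output format (the marker value here is an arbitrary assumption):

        using System.IO;
        using System.IO.Compression;

        static class Packer
        {
            // arbitrary marker chosen for this sketch; any bytes that cannot
            // begin your legacy plain-text data would do
            static readonly byte[] Magic = { 0xC0, 0xDE };

            public static byte[] Pack(byte[] raw)
            {
                using (var ms = new MemoryStream())
                {
                    ms.Write(Magic, 0, Magic.Length);
                    using (var deflate = new DeflateStream(ms, CompressionMode.Compress))
                        deflate.Write(raw, 0, raw.Length);
                    return ms.ToArray();   // valid even after the stream is closed
                }
            }

            public static bool IsPacked(byte[] data)
            {
                return data.Length >= 2 && data[0] == Magic[0] && data[1] == Magic[1];
            }
        }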

    Read the article

  • Lightweight HTTP application/server for static content

    - by PartlyCloudy
    Hi,

    I am in need of a scalable and performant HTTP application/server that will be used for static file serving/uploading, so I only need support for GET and PUT operations. However, there are a few extra features that I need:

    Custom authentication: I need to check credentials against a database for each request, so I must be able to integrate proprietary database interaction.

    Support for signed access keys: Access to resources via PUT should be signed using a key like http://uri/?key=foo. The key then contains information about the request, like md5(user + path + secret), which allows me to block unwanted requests. The application/server should allow me to check for this.

    Performance: I'd like to avoid piping content as much as possible; otherwise the whole application could be implemented in Perl/etc. in a few lines as CGI.

    Perlbal (in webserver mode) looks nice; however, the single-threaded model does not fit with my database lookup, and it also does not support query strings. Lighttpd/Nginx/... have some modules for these tasks, but it is not feasible to put everything together without ending up writing my own extensions/modules.

    So how would you solve this? Are there other lightweight webservers available for this? Should I implement an application inside of a webserver (i.e. CGI)? How can I avoid/speed up piping content between the webserver and my application?

    Thanks in advance!
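
    One common pattern that fits the "avoid piping content" requirement is internal redirects: a small app does the database auth and key check, then hands the actual file transfer back to the webserver. A hedged sketch of the nginx side (paths and ports are illustrative):

        # a sketch: files under /protected/ can only be served via an internal
        # redirect, so every request must pass through the auth app first
        location /files/ {
            proxy_pass http://127.0.0.1:9000;   # auth app checks DB + signed key
        }

        location /protected/ {
            internal;                           # not reachable from outside
            alias /var/data/files/;
        }

        # the auth app, after validating, responds with only a header:
        #   X-Accel-Redirect: /protected/<path>
        # and nginx streams the file itself - no piping through the app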

    Read the article

  • On the search for my next great .Net Read

    - by user127954
    Just got done with "The Art of Unit Testing". It was a great read, and I think everyone should go buy a copy. With that said, I think the next book I'd like to read would be an architecture/design type book that focuses heavily on building your objects/software in such a way that it would be:

    Low coupling
    High cohesion
    Easily maintainable / extendable
    Easy to test
    Easy to navigate / debug

    The above characteristics are the most important ones, but maybe it would also include (though not necessarily) designing for:

    Performance - don't want to design a system and at the end find out it's dog slow :)
    Scalability - again, don't want to design something and at the end find out it won't scale.

    I'd also prefer (but again, not mandatory):

    Something newer - architectural principles seem to gradually evolve/improve over time, and I'd like something with current thinking.
    .NET as illustrating language - like I said above, it's not mandatory, but since it's what I use every day I'd prefer it to be in .NET. It doesn't really matter if it's in VB.NET or C#.

    Some of the topics it would talk about are how to minimize dependencies and how to use interfaces throughout your solution rather than concrete classes. Maybe it would contrast/compare some of the newest design principles like DDD, the Repository pattern, etc... I already have "Clean Code" (don't know if it's this type of book or not) and "Working Effectively with Legacy Code" on my radar, but I'd like to read a book based upon the topics I talked about above first. Is there such a book?

    Read the article

  • PHP modifying and combining array

    - by Industrial
    Hi everyone,

    I have a bit of an array headache going on. The function does what I want, but since I am not yet well acquainted with PHP's array/looping functions, my question is whether there's any part of this function that could be improved from a performance perspective.

    I tried to be as complete as possible in my descriptions of each stage of the function, which, shortly described, prefixes all keys in an array, fills up eventual empty/non-valid keys with '', and removes the prefixes before returning the array:

        $var = myFunction( array('key1', 'key2', 'key3', '111') );

        function myFunction ($keys) {
            $prefix = 'prefix_';
            $keyCount = count($keys);

            // Prefix each key and remove old keys
            for ($i = 0; $i < $keyCount; $i++) {
                $keys[] = $prefix . $keys[$i];
                unset($keys[$i]);
            }
            // output: array('prefix_key1', 'prefix_key2', 'prefix_key3', 'prefix_111')

            // Get all keys from memcached. Only returns valid keys
            $items = $this->memcache->get($keys);
            // output: array('prefix_key1' => 'value1', 'prefix_key2' => 'value2', 'prefix_key3' => 'value3')
            // note: key 111 was not found in memcache.

            // Fill up eventual keys that are not valid/empty from memcache
            $return = $items + array_fill_keys($keys, '');
            // output: array('prefix_key1' => 'value1', 'prefix_key2' => 'value2',
            //               'prefix_key3' => 'value3', 'prefix_111' => '')

            // Remove the prefixes for each result before returning array to application
            foreach ($return as $k => $v) {
                $expl = explode($prefix, $k);
                $return[$expl[1]] = $v;
                unset($return[$k]);
            }
            // output: array('key1' => 'value1', 'key2' => 'value2', 'key3' => 'value3', '111' => '')

            return $return;
        }

    Thanks a lot!
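
    A hedged micro-improvement to the last loop (a sketch only): since the prefix length is known, substr is both cheaper and safer than explode, which would misbehave if a key happened to contain the prefix string twice:

        // a sketch: strip a known prefix by length instead of splitting on it
        $plen = strlen($prefix);
        $unprefixed = array();
        foreach ($return as $k => $v) {
            $unprefixed[substr($k, $plen)] = $v;
        }
        $return = $unprefixed;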

    Read the article

  • Creating collaborative whiteboard drawing application

    - by Steven Sproat
    I have my own drawing program in place, with a variety of "drawing tools" such as Pen, Eraser, Rectangle, Circle, Select, Text etc. It's made with Python and wxPython. Each tool mentioned above is a class, and they all have polymorphic methods such as left_down(), mouse_motion(), hit_test() etc. The program manages a list of all drawn shapes - when a user has drawn a shape, it's added to the list. This is used to manage undo/redo operations too.

    So, I have a decent codebase that I can hook collaborative drawing into. Each shape could be changed to know its owner - the user who drew it - and to only allow delete/move/rescale operations to be performed on shapes owned by one person.

    I'm just wondering the best way to develop this. One person in the "session" will have to act as the server; I have no money to offer free central servers. Somehow users will need a way to connect to servers, meaning some kind of "discover servers" browser... or something. How do I broadcast changes made to the application? Drawing in realtime and broadcasting a message on each mouse motion event would be costly in terms of performance, and things get worse the more users there are at a given time.

    Any ideas are welcome; I'm not too sure where to begin with developing this (or even how to test it).
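
    On the per-mouse-motion cost: a common mitigation is to batch points and broadcast at a fixed rate rather than per event. A hedged Python sketch (the send callable stands in for whatever networking layer ends up being used):

        import json
        import time

        class StrokeBatcher:
            """Collect mouse-motion points and flush them at a fixed rate.

            A sketch only: 'send' is a placeholder for the real broadcast layer.
            """
            def __init__(self, send, interval=0.05):
                self.send = send
                self.interval = interval
                self.points = []
                self.last_flush = time.time()

            def add_point(self, x, y):
                self.points.append((x, y))
                if time.time() - self.last_flush >= self.interval:
                    self.flush()

            def flush(self):
                if self.points:
                    self.send(json.dumps({"type": "stroke", "points": self.points}))
                    self.points = []
                self.last_flush = time.time()

    The tool's mouse_motion() would call add_point(), and left_up() would call flush() so a stroke always ends complete.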

    Read the article

  • Common "truisms" needing correction the most

    - by Charles Bretana
    In addition to "I never met a man I didn't like", Will Rogers had another great little ditty I've always remembered. It went: "It's not what you don't know that'll hurt you, it's what you do know that ain't so."

    We all know or subscribe to many IT "truisms" that mostly have a strong basis in fact, in something in our professional careers - something we learned from others, lessons learned the hard way by ourselves, or by others who came before us. Unfortunately, as these truisms spread throughout the community, the details - why they came about and the caveats that affect when they apply - tend not to spread along with them.

    We all have a tendency to look for, and latch on to, small "rules" or principles that we can use to avoid doing a complete exhaustive analysis for every decision. But even though they are correct much of the time, when we sometimes misapply them, we pay a penalty that could be avoided by understanding the details behind them.

    For example, when user-defined functions were first introduced in SQL Server, it became "common knowledge" within a year or so that they had extremely bad performance (because they required a re-compilation for each use) and should be avoided. This "truism" still increases many database developers' aversion to using UDFs, even though Microsoft's introduction of inline UDFs, which do not suffer from this issue at all, mitigates it substantially. In recent years I have run into numerous DBAs who still believe you should "never" use UDFs because of this.

    What other common not-so-"truisms" do you know, which many developers believe, that are not quite as universally true as is commonly understood, and which the developer community would benefit from being better educated about? Please include why it was "true" to start off with, and under what circumstances it's not true. Limit responses to issues that are technical, where the "common" application of a "rule or principle" is in fact correct most of the time, or was correct back when it was first elucidated, but - in edge cases, because of not understanding the principle thoroughly, because technology has changed since it first spread, or when applying the rule today without understanding the details behind it - can easily backfire or cause the opposite of the intended effect.

    Read the article

  • Implementing threading to prevent a UI block caused by a bug in an async function

    - by Marcx
    I think I ran up against a bug in an async function... precisely the getDirectoryListingAsync() of the File class. This method is supposed to return an object containing the list of files in a specified folder. I found that when calling this method on a directory with a lot of files (in my tests, more than 20k files), after a few seconds there is a block on the UI until the process is completed.

    I think this method is separated into two main blocks:

    1) get the list of files
    2) create the array with the details of the files

    Point 1 seems to be async (for a few seconds the UI responds well); then, when the process passes from point 1 to point 2, the UI freezes until the complete event is dispatched. Here's some (simple) code:

        private function checkFiles(dir:File):void {
            if (dir.exists) {
                dir.addEventListener(FileListEvent.DIRECTORY_LISTING, listaImmaginiLocale);
                dir.getDirectoryListingAsync();
                // after this point, for the first seconds the UI responds well (point 1),
                // a few seconds later (point 2) the UI is frozen
            }
        }

        private function listaImmaginiLocale(event:FileListEvent):void {
            // from this point on the UI is responsive again...
        }

    Actually, in my projects there are some functions that perform heavy CPU usage, and to prevent the UI block I implemented a simple function that, after some iterations, will wait, giving the UI time to be refreshed:

        private var maxIteration:int = 150000;

        private function sampleFunct(offset:int = 0):void {
            if (offset < maxIteration) {
                // do something

                // call the recursive function using a timeout:
                // if the offset is a multiple of 1000 the function will wait 15 millisec,
                // otherwise it will be called immediately.
                // 1000 is a random number for the purpose of this example; I usually change
                // the value based on how heavy the function itself is...
                setTimeout(function():void { sampleFunct(++offset); }, (offset % 1000 ? 0 : 15));
            }
        }

    Using this method I get a responsive UI without hurting performance. I'd like to implement it inside the getDirectoryListingAsync method, but I don't know if that's possible, how I can do it, or where the file to edit or extend is. Any suggestions?

    Read the article

  • return Queryable<T> or List<T> in a Repository<T>

    - by Danny Chen
    Currently I'm building a Windows application using SQLite. In the database there is a table, say User, and in my code there are a Repository<User> and a UserManager. I think it's a very common design. In the repository there is a List method:

        // Repository<User> class
        public List<User> List(where, orderby, topN parameters and etc)
        {
            // query and return
        }

    This brings a problem if I want to do something complex in UserManager.cs:

        // UserManager.cs
        public List<User> ListUsersWithBankAccounts()
        {
            var userRep = new UserRepository();
            var bankRep = new BankAccountRepository();
            var result = // do something complex, say "I want the users who live in NY
                         // and have at least two bank accounts in the system"
        }

    You can see that returning List<User> brings a performance issue, because the query is executed earlier than expected. Now I need to change it to something like an IQueryable<T>:

        // Repository<User> class
        public TableQuery<User> List(where, orderby, topN parameters and etc)
        {
            // query and return
        }

    TableQuery<T> is part of the SQLite driver, and is roughly the equivalent of IQueryable<T> in EF: it represents a query without executing it immediately. But now the problem is that UserManager.cs doesn't know what a TableQuery<T> is. I need to add a new reference and import namespaces like SQLite.Query in the business layer project. That gives off a really bad code smell. Why should my business layer know the details of the database? Why should the business layer know what SQLite is? What's the correct design, then?
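
    A hedged sketch of one common compromise: keep deferral inside the data layer, but let the business layer express its criteria through standard BCL types (expression trees), so no SQLite namespace leaks upward. The names here are illustrative, not from the question's codebase:

        using System;
        using System.Collections.Generic;
        using System.Linq.Expressions;

        // the business layer sees only System.* types
        public interface IRepository<T>
        {
            // criteria are composed by callers; the data layer combines them into
            // a single TableQuery internally, so composition still happens before execution
            IReadOnlyList<T> Find(params Expression<Func<T, bool>>[] criteria);
        }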

    Read the article

  • I'm annoyed with ASP.NET MVC action links. Is there something better in MVC 3?

    - by Jonathon Kresner
    After almost 3 years with MVC, I'm scratching my head. Is it just me, or does the way we specify links in ASP.NET MVC suck?

        @Html.ActionLink("Log Off", "LogOff", "Account")

    In the previews for MVC 1 we had the funky generic action links, which gave us intellisense and compile checking, and which I LOVED. I know they removed them because of performance issues and because you could not actually guarantee that the route would resolve all the time... However, the default way of doing it just doesn't make me feel safe enough in a big application.

    I've also used T4MVC with MVC 2; to be honest, I didn't really like it. It's not part of the MVC framework, and it's frustrating to develop with, especially with source control in big teams and continuous integration builds. I guess I could also import MVC Futures and keep using the generic types (it's probably what I'll do).

    I'm just about to start a very big project and was wondering what other people are thinking. Is anyone else annoyed with the options, or has a new solution? It seems like action links are the most basic and frequently used feature. Shouldn't there be a good out-of-the-box solution? We're just about to hit revision 3 of this framework.

    Read the article

  • Nhibernate + Gridview + TargetInvocationException

    - by Scott
    For our grid views, we're setting the data sources as a list of results from an NHibernate query. We're using lazy loading, so the objects are actually proxied... most of the time. In some instances the list will consist of the types Student and Composition_Aop_Proxy_jklasjdkl31231, which implements the same members as the Student class. We've still got the session open, so the lazy loading would resolve fine if the GridView didn't throw an error about the different types in the grid view.

    Our current workaround is to clone the object, which results in fetching all of the data that can be lazily loaded, even though most of it won't be accessed... ever. This converts the proxy into an actual object, and the grid view is happy. The performance implications kind of scare me, as we're getting closer to rolling the code out as is.

    I've tried evicting the object after a save, which should ensure that everything is a proxy, but this doesn't seem like a good idea either. Does anyone have any suggestions/workarounds?
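
    A hedged alternative sketch: bind a flat projection instead of the entities, so every row is the same concrete type regardless of proxying, and only the columns the grid actually shows ever trigger a load (type and property names here are illustrative):

        // a sketch: project to a DTO before binding; no proxies reach the GridView
        public class StudentRow
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        var rows = students
            .Select(s => new StudentRow { Id = s.Id, Name = s.Name })
            .ToList();

        grid.DataSource = rows;
        grid.DataBind();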

    Read the article

  • Android opengl releasing textures

    - by user1642418
    I have a bit of a problem. I am developing a game for Android, plus the engine, and I got stuck. I am getting an OpenGL out-of-memory error, and either the app crashes or the phone hangs after loading a scene multiple times. For example: the app launches, shows the main menu, and the 1st level/scene is loaded. Then I go back to the main menu, and repeat. It doesn't matter which scene I load; after 4-6 times the error occurs.

    Some background: each time a scene is loaded, all the resources are released, and upon the first frame render the needed stuff gets loaded. The performance is more or less OK. Note that I am calling the glDeleteTextures method, but I think it's not doing its job and releasing memory.

    The thing is, when I minimize the app and open it again, the problem doesn't occur, even though almost the same things are executed. This way Android releases the memory. How do I release / get rid of unused textures properly?

    This happens on an HTC Desire HD (Ice Cream Sandwich 4.0.4). Other games work fine, so I bet this is not a problem in the ROM.
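
    A hedged guess at one common cause, with a sketch: glDeleteTextures is silently a no-op if it runs on a thread without a current GL context (e.g. from a UI callback during the scene switch). With a GLSurfaceView, queueEvent runs the deletion on the renderer thread; field names here are illustrative:

        // a sketch: ensure deletion happens on the GL thread that owns the context
        // (assumes android.opengl.GLES20 and a GLSurfaceView field)
        glSurfaceView.queueEvent(new Runnable() {
            @Override
            public void run() {
                int[] ids = new int[] { textureId };
                GLES20.glDeleteTextures(1, ids, 0);
            }
        });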

    Read the article

  • Updating UI objects in windows forms

    - by P a u l
    Pre-.NET I was using MFC, ON_UPDATE_COMMAND_UI, and the CCmdUI class to update the state of my Windows UI. From the older MFC/Win32 reference:

    "Typically, menu items and toolbar buttons have more than one state. For example, a menu item is grayed (dimmed) if it is unavailable in the present context. Menu items can also be checked or unchecked. A toolbar button can also be disabled if unavailable, or it can be checked. Who updates the state of these items as program conditions change? Logically, if a menu item generates a command that is handled by, say, a document, it makes sense to have the document update the menu item. The document probably contains the information on which the update is based. If a command has multiple user-interface objects (perhaps a menu item and a toolbar button), both are routed to the same handler function. This encapsulates your user-interface update code for all of the equivalent user-interface objects in a single place. The framework provides a convenient interface for automatically updating user-interface objects. You can choose to do the updating in some other way, but the interface provided is efficient and easy to use."

    What is the guidance for .NET Windows Forms? I am using an Application.Idle handler in the main form, but I am not sure this is the best way to do this. About the time I put all my UI updates in the Idle event handler, my app started to show some performance problems, and I don't have the metrics to track this down yet. Not sure if it's related.
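
    For reference, a hedged sketch of the Idle-handler pattern the question describes (control and state names are illustrative). Keeping the handler cheap - just property assignments driven by already-computed state - is usually what keeps it from becoming a performance problem, since Idle fires very often:

        // a sketch of ON_UPDATE_COMMAND_UI's closest WinForms analogue
        Application.Idle += delegate
        {
            // cheap checks only; no queries or I/O here
            saveMenuItem.Enabled = document.IsDirty;
            undoMenuItem.Enabled = document.CanUndo;
            boldButton.Checked   = editor.SelectionIsBold;
        };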

    Read the article

  • Best Functional Approach

    - by dbyrne
    I have some mutable Scala code that I am trying to rewrite in a more functional style. It is a fairly intricate piece of code, so I am trying to refactor it in pieces. My first thought was this:

        def iterate(count: Int, d: MyComplexType) = {
          // Generate next value n
          // Process n, causing some side effects
          iterate(count - 1, n)
        }

    This didn't seem functional at all to me, since I still have side effects mixed throughout my code. My second thought was this:

        def generateStream(d: MyComplexType): Stream[MyComplexType] = {
          // Generate next value n
          Stream.cons(n, generateStream(n))
        }

        for (n <- generateStream(initialValue).take(2000000)) {
          // process n, causing some side effects
        }

    This seemed like a better solution, because at least I've isolated my functional value-generation code from the mutable value-processing code. However, it is much less memory-efficient, because I am generating a large list that I don't really need to store.

    This leaves me with three choices:

    1. Write a tail-recursive function, bite the bullet, and refactor the value-processing code
    2. Use a lazy list (this is not a memory-sensitive app, although it is performance-sensitive)
    3. Come up with a new approach

    I guess what I really want is a lazily evaluated sequence where I can discard the values after I've processed them. Any suggestions?
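
    A hedged sketch of exactly that: an Iterator is lazy like a Stream but does not memoize, so processed values become garbage immediately (generateNext and process stand in for the question's elided logic):

        // a sketch: Iterator.iterate is lazy and holds no previous values,
        // unlike Stream, which memoizes every cons cell it has produced
        Iterator.iterate(initialValue)(generateNext)
          .take(2000000)
          .foreach(process)   // the side-effecting part stays isolated here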

    Read the article

  • cython setup.py gives .o instead of .dll

    - by alok1974
    Hi, I am a newbie to Cython, so pardon me if I am missing something obvious here. I am trying to build C extensions to be used in Python for enhanced performance. I have an fc.py module with a bunch of functions, and I am trying to generate a .dll through Cython using distutils, running on win64:

        c:\python26\python c:\cythontest\setup.py build_ext --inplace

    I have the distutils.cfg in C:\Python26\Lib\distutils. As required, distutils.cfg has the following config settings:

        [build]
        compiler = mingw32

    My setup.py looks like this:

        from distutils.core import setup
        from distutils.extension import Extension
        from Cython.Distutils import build_ext

        ext_modules = [Extension('fc', [r'C:\cythonTest\fc.pyx'])]

        setup(
            name = 'FC Extensions',
            cmdclass = {'build_ext': build_ext},
            ext_modules = ext_modules
        )

    I have the latest version of MinGW for target/host amd64 Windows builds, and the latest version of Cython for Python 2.6 on win64. Cython does give me fc.c without errors - only a few warnings about type conversions, which I will handle once the build is right. Further, it produces fc.def and fc.o files instead of giving a .dll. I get no errors. I read in threads that it should create the .so or .dll automatically as required, but this is not happening.

    Read the article

  • Sql Server 2005 multiple insert with c#

    - by bottlenecked
    Hello. I have a class named Entry declared like this:

        class Entry {
            string Id { get; set; }
            string Name { get; set; }
        }

    and then a method that will accept multiple such Entry objects for insertion into the database using ADO.NET:

        static void InsertEntries(IEnumerable<Entry> entries) {
            // build a SqlCommand object
            using (SqlCommand cmd = new SqlCommand()) {
                ...
                const string refcmdText = "INSERT INTO Entries (id, name) VALUES (@id{0}, @name{0});";
                int count = 0;
                string query = string.Empty;
                // build a large query
                foreach (var entry in entries) {
                    query += string.Format(refcmdText, count);
                    cmd.Parameters.AddWithValue(string.Format("@id{0}", count), entry.Id);
                    cmd.Parameters.AddWithValue(string.Format("@name{0}", count), entry.Name);
                    count++;
                }
                cmd.CommandText = query;
                // and then execute the command
                ...
            }
        }

    And my question is this: should I keep using the above way of sending multiple insert statements (build a giant string of insert statements and their parameters and send it over the network), or should I keep an open connection and send a single insert statement for each Entry, like this:

        using (SqlCommand cmd = new SqlCommand()) {
            using (SqlConnection conn = new SqlConnection()) {
                // assign connection string and open connection
                ...
                cmd.Connection = conn;
                foreach (var entry in entries) {
                    cmd.Parameters.Clear();
                    cmd.CommandText = "INSERT INTO Entries (id, name) VALUES (@id, @name);";
                    cmd.Parameters.AddWithValue("@id", entry.Id);
                    cmd.Parameters.AddWithValue("@name", entry.Name);
                    cmd.ExecuteNonQuery();
                }
            }
        }

    What do you think? Will there be a performance difference in SQL Server between the two? Are there any other consequences I should be aware of? Thank you for your time!
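
    For what it's worth, a third option often wins for large batches: SqlBulkCopy. A hedged sketch reusing the question's names (conn is an open SqlConnection):

        // a sketch: stage the rows in a DataTable and bulk-insert them in one call
        var table = new DataTable();
        table.Columns.Add("id", typeof(string));
        table.Columns.Add("name", typeof(string));
        foreach (var entry in entries)
            table.Rows.Add(entry.Id, entry.Name);

        using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "Entries" })
            bulk.WriteToServer(table);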

    Read the article

  • What's so bad about building XML with string concatenation?

    - by wsanville
    In the thread "What's your favorite 'programmer ignorance' pet peeve?", the following answer appears, with a large number of upvotes:

        Programmers who build XML using string concatenation.

    My question is: why is building XML via string concatenation (such as with a StringBuilder in C#) bad? I've done this several times in the past, as it's sometimes the quickest way for me to get from point A to point B when it comes to the data structures/objects I'm working with. So far, I have come up with a few reasons why this isn't the greatest approach, but is there something I'm overlooking? Why should this be avoided?

    Probably the biggest reason I can think of is that you need to escape your strings manually, and most programmers will forget this. It will work great for them when they test it, but then "randomly" their apps will fail when someone throws an & symbol into their input somewhere. OK, I'll buy this, but it's really easy to prevent the problem (SecurityElement.Escape, to name one way).

    When I do this, I usually omit the XML declaration (i.e. <?xml version="1.0"?>). Is this harmful?

    Performance penalties? If you stick with proper string concatenation (i.e. StringBuilder), is this anything to be concerned about? Presumably, a class like XmlWriter will also need to do a bit of string manipulation...

    There are more elegant ways of generating XML, such as using XmlSerializer to automatically serialize/deserialize your classes. OK, sure, I agree. C# has a ton of useful classes for this, but sometimes I don't want to make a class for something really quick, like writing out a log file or something. Is this just me being lazy? If I am doing something "real", this is my preferred approach for dealing with XML.
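
    For comparison, a hedged sketch of the XmlWriter route for the quick log-file case the question mentions - escaping is handled automatically, and no serializable class is needed (element names and the input value are illustrative):

        using System.Xml;

        // a sketch: ad-hoc XML without string concatenation or a dedicated class
        string userInput = "Tom & Jerry";
        using (XmlWriter writer = XmlWriter.Create("log.xml"))
        {
            writer.WriteStartElement("log");
            writer.WriteStartElement("entry");
            writer.WriteElementString("message", userInput); // '&', '<' etc. escaped for you
            writer.WriteEndElement();
            writer.WriteEndElement();
        }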

    Read the article
