Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.


  • Best approach to storing image pixels in bottom-up order in Java

    - by finnw
    I have an array of bytes representing an image in Windows BMP format and I would like my library to present it to the Java application as a BufferedImage, without copying the pixel data. The main problem is that all implementations of Raster in the JDK store image pixels in top-down, left-to-right order whereas BMP pixel data is stored bottom-up, left-to-right. If this is not compensated for, the resulting image will be flipped vertically. The most obvious "solution" is to set the SampleModel's scanlineStride property to a negative value and change the band offsets (or the DataBuffer's array offset) to point to the top-left pixel, i.e. the first pixel of the last line in the array. Unfortunately this does not work because all of the SampleModel constructors throw an exception if given a negative scanlineStride argument. I am currently working around it by forcing the scanlineStride field to a negative value using reflection, but I would like to do it in a cleaner and more portable way if possible. E.g. is there another way to fool the Raster or SampleModel into arranging the pixels in bottom-up order but without breaking encapsulation? Or is there a library somewhere that will wrap the Raster and SampleModel, presenting the pixel rows in reverse order? I would prefer to avoid the following approaches: Copying the whole image (for performance reasons. The code must process hundreds of large (>= 1 Mpixel) images per second and although the whole image must be available to the application, it will normally access only a tiny (but hard-to-predict) portion of the image.) Modifying the DataBuffer to perform coordinate transformation (this actually works but is another "dirty" solution because the buffer should not need to know about the scanline/pixel layout.) Re-implementing the Raster and/or SampleModel interfaces from scratch (but I have a hunch that I will be unable to avoid this.)
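
    A minimal Java sketch of the reflection workaround described above, assuming a 24-bit bottom-up BGR BMP with no scanline padding; the field name relies on JDK internals, so treat this as illustrative rather than a portable fix:

        import java.awt.Transparency;
        import java.awt.color.ColorSpace;
        import java.awt.image.*;
        import java.lang.reflect.Field;

        public class BottomUpRasterSketch {
            // Wraps bottom-up BGR pixel data in a BufferedImage without copying it.
            public static BufferedImage wrap(byte[] pixels, int width, int height) throws Exception {
                int rowBytes = width * 3;                          // assumes no scanline padding
                DataBufferByte buffer = new DataBufferByte(pixels, pixels.length);

                // Point the band offsets at the last row, in B, G, R order,
                // so that raster row 0 reads the bottom scanline of the BMP.
                int base = (height - 1) * rowBytes;
                int[] bandOffsets = {base + 2, base + 1, base};

                // The constructor rejects a negative scanlineStride, so build the
                // model with a positive stride first...
                ComponentSampleModel sm = new ComponentSampleModel(
                        DataBuffer.TYPE_BYTE, width, height, 3, rowBytes, bandOffsets);

                // ...then force the protected field negative via reflection
                // (fragile: depends on JDK internals).
                Field stride = ComponentSampleModel.class.getDeclaredField("scanlineStride");
                stride.setAccessible(true);
                stride.setInt(sm, -rowBytes);

                WritableRaster raster = Raster.createWritableRaster(sm, buffer, null);
                ColorModel cm = new ComponentColorModel(
                        ColorSpace.getInstance(ColorSpace.CS_sRGB),
                        false, false, Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
                return new BufferedImage(cm, raster, false, null);
            }
        }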

    Read the article

  • Problem with Binding Multiple Objects on WPF

    - by Diego Modolo Ribeiro
    Hi, I'm developing a user control called SearchBox, that will act as a replacement for ComboBox for multiple records, as the performance in ComboBox is low for this scenario, but I'm having a big binding problem. I have a base control with a DependencyProperty called SelectedItem, and my SearchControl shows that control so that the user can select the record. The SearchControl also has a DependencyProperty called SelectedItem on it, and it is being updated when the user selects something on the base control, but the UI, which has the SearchControl, is not being updated. I used a null converter so that I could see if the property was updating, but it's not. Below is some of the code: This is the UI: <NectarControles:MaskedSearchControl x:Name="teste" Grid.Row="0" Grid.Column="1" Height="21" ItemsSource="{Binding C001_Lista}" DisplayMember="C001_Codigo" Custom:CustomizadorTextBox.CodigoCampo="C001_Codigo" ViewType="{x:Type C001:C001_Search}" ViewModelType="{x:Type C001_ViewModel:C001_ViewModel}" SelectedItem="{Binding Path=Instancia.C001_Holding, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged, Converter={StaticResource DebuggingConverter}}"> </NectarControles:MaskedSearchControl> This is the relevant part of the SearchControl: Binding selectedItemBinding = new Binding("SelectedItem"); selectedItemBinding.Source = this.SearchControlBase; selectedItemBinding.Mode = BindingMode.TwoWay; selectedItemBinding.UpdateSourceTrigger = UpdateSourceTrigger.PropertyChanged; this.SetBinding(SelectedItemProperty, selectedItemBinding); ... public System.Data.Objects.DataClasses.IEntityWithKey SelectedItem { get { return (System.Data.Objects.DataClasses.IEntityWithKey)GetValue(SelectedItemProperty); } set { SetValue(SelectedItemProperty, value); if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs("SelectedItem")); } } Please someone help me... Tks...

    Read the article

  • Writing a JMS Publisher without "public static void main"

    - by The Elite Gentleman
    Hi guys, Every example I've seen on the web, e.g. http://www.codeproject.com/KB/docview/jms_to_jms_bridge_activem.aspx, creates a publisher and subscriber with a public static void main method. I don't think that'll work for my web application. I'm learning JMS and I've set up Apache ActiveMQ to run on JBoss 5 and Tomcat 6 (with no glitches). I'm writing a messaging JMS service that needs to send email asynchronously. I've already written a JMS subscriber that receives the message (the class implements MessageListener). My question is simple: How do I write a publisher so that my web applications can call it? Does it have to be published somewhere? My thought is to create a publisher with a no-argument constructor and get the MessageQueue Factory, etc. from the JNDI pool (in the constructor). Is my idea correct? How do I subscribe my subscriber to the Queue Receiver? (So far, the subscriber has no constructor, and if I write a constructor, do I always subscribe myself to the Queue receiver?) Thanks for your help, sorry if my terminology is not up to scratch, there are too many java terminologies that I get lost sometimes (maybe a java GPS will do! :-) ) PS Is there a tutorial out there that explains how to write a "better" (better can mean anything, but in my case it's all about performance in high demand requests) JMS Publisher and Subscriber that I can run on an application server such as JBoss or Glassfish? Don't forget that the JMS application will need "guaranteed" uptime as many applications will use this.
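
    As a rough sketch of the idea (the JNDI names "jms/ConnectionFactory" and "jms/MailQueue" are placeholders that depend on how ActiveMQ is registered in JBoss or Tomcat), a publisher with a no-argument constructor that web-application code can instantiate and call might look like this:

        import javax.jms.*;
        import javax.naming.InitialContext;

        public class MailEventPublisher {

            private final ConnectionFactory factory;
            private final Queue queue;

            // Looks up the factory and destination once from the container's JNDI context.
            public MailEventPublisher() throws Exception {
                InitialContext ctx = new InitialContext();
                factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
                queue = (Queue) ctx.lookup("jms/MailQueue");
            }

            // Called from servlet or service code; sends one text message and returns.
            public void publish(String emailPayload) throws JMSException {
                Connection con = factory.createConnection();
                try {
                    Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(queue);
                    producer.send(session.createTextMessage(emailPayload));
                } finally {
                    con.close(); // closes the session and producer as well
                }
            }
        }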

    Read the article

  • Design: Website calling a webservice on the same machine

    - by Chris L
    More of a design/conceptual question. At work the decision was made to have our data access layer be called through webservices. So our website would call the webservices for any/all data to and from the database. Both the website & the webservices will be on the same machine (so no trip across the wire), but the database is on a separate machine (so that would require a trip across the wire regardless). This is all in-house; the website, webservice, and database are all within the same company (AFAIK, the webservices won't be reused by any other party). To the best of my knowledge: the website will open a port to the webservices, and the webservices will in turn open another port and go across the wire to the database server to get/submit the data. The trip across the wire can't be avoided, but I'm concerned about the webservices standing in the middle. I do agree there need to be distinct layers between the functionality (such as business layer, data access layer, etc...), but this seems overly complex to me. I'm also sensing there will be some performance problems down the line. Seems to me it would be better to have the (DAL) assemblies referenced directly within the solution, thus negating the first port-to-port connection. Any thoughts (or links) both for and against this idea would be appreciated. P.S. We're a .NET shop (migrating from VB to C# 3.5)

    Read the article

  • Pitfalls and practical Use-Cases: Toplink, Hibernate, Eclipse Link, Ibatis ...

    - by Martin K.
    I have worked a lot with Hibernate as my JPA implementation. In most cases it works fine! But I have also seen a lot of pitfalls: Remoting with persisted objects is difficult, because Hibernate replaces the Java collections with its own collection implementations, so every client must have the Hibernate .jar libraries and you have to take care of LazyLoading exceptions etc. One way to get around this problem is the use of webservices. Dirty checking is done against the database without any lock; this "delayed SQL" means the data access isn't ACID compliant (lost data...). Implicit updates mean we don't know whether an object is modified or not (commit causes updates). Are there similar issues with Toplink, Eclipse Link and Ibatis? When should I use them? Do they have similar performance? Are there reasons to choose Eclipse Link/Toplink... over Hibernate?

    Read the article

  • Registry access error when Migrating ASP.NET application to IIS7

    - by fande455
    I'm running Windows 7 64-bit and IIS7. I'm trying to set up a web application that was previously on IIS6 on XP. It's giving me the error below. I've added the network service user to the Performance Monitor Users group to no avail. Access to the registry key 'Global' is denied. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.UnauthorizedAccessException: Access to the registry key 'Global' is denied. ASP.NET is not authorized to access the requested resource. Consider granting access rights to the resource to the ASP.NET request identity. ASP.NET has a base process identity (typically {MACHINE}\ASPNET on IIS 5 or Network Service on IIS 6) that is used if the application is not impersonating. If the application is impersonating via <identity impersonate="true"/>, the identity will be the anonymous user (typically IUSR_MACHINENAME) or the authenticated request user. To grant ASP.NET access to a file, right-click the file in Explorer, choose "Properties" and select the Security tab. Click "Add" to add the appropriate user or group. Highlight the ASP.NET account, and check the boxes for the desired access.

    Read the article

  • How to invalidate cache when benchmarking?

    - by Michael Buen
    I have this code, that when swapping the order of UsingAs and UsingCast, their performance also swaps. using System; using System.Diagnostics; using System.Linq; using System.IO; class Test { const int Size = 30000000; static void Main() { object[] values = new MemoryStream[Size]; UsingAs(values); UsingCast(values); Console.ReadLine(); } static void UsingCast(object[] values) { Stopwatch sw = Stopwatch.StartNew(); int sum = 0; foreach (object o in values) { if (o is MemoryStream) { var m = (MemoryStream)o; sum += (int)m.Length; } } sw.Stop(); Console.WriteLine("Cast: {0} : {1}", sum, (long)sw.ElapsedMilliseconds); } static void UsingAs(object[] values) { Stopwatch sw = Stopwatch.StartNew(); int sum = 0; foreach (object o in values) { if (o is MemoryStream) { var m = o as MemoryStream; sum += (int)m.Length; } } sw.Stop(); Console.WriteLine("As: {0} : {1}", sum, (long)sw.ElapsedMilliseconds); } } Outputs: As: 0 : 322 Cast: 0 : 281 When doing this... UsingCast(values); UsingAs(values); ...Results to this: Cast: 0 : 322 As: 0 : 281 When doing just this... UsingAs(values); ...Results to this: As: 0 : 322 When doing just this: UsingCast(values); ...Results to this: Cast: 0 : 322 Aside from running them independently, how to invalidate the cache so the second code being benchmarked won't receive the cached memory of first code? Benchmarking aside, just loved the fact that modern processors do this caching magic :-)

    Read the article

  • Which Javascript framework (jQuery vs Dojo vs ... )?

    - by cletus
    There are a few Javascript frameworks/toolkits out there, such as: jQuery; Dojo; Prototype; YUI; MooTools; ExtJS; SmartClient; and others I'm sure. It certainly seems that jQuery is ascendant in terms of mindshare at the moment. For example, Microsoft (ASP.NET MVC) and Nokia will use it. I also found this performance comparison of Dojo, jQuery, MooTools and Prototype (Edit: Updated Comparison), which looks highly favourable to Dojo and jQuery. Now my previous experience with Javascript has been the old school HTML + Javascript most of us have done and RIA frameworks like Google Web Toolkit ("GWT") and Ext-GWT, which were a fairly low-stress entry into the Ajax world for someone from a Java background, such as myself. But, after all this, I find myself leaning towards the more PHP + Ajax type solution, which just seems that much more lightweight. So I've been looking into jQuery and I really like its use of commands, the use of fluent interfaces and method chaining, its cross-browser CSS selector superset, the fact that it's lightweight and extensible, the brevity of the syntax, unobtrusive Javascript and the plug-in framework. Now obviously many of these aren't unique to jQuery but on the basis that some things are greater than the sum of their parts, it just seems that it all fits together and works well. So jQuery seems to have a lot going for it and it looks to be the frontrunner for what I choose to concentrate on. Is there anything else I should be aware of or any particular reasons not to choose it or to choose something else? EDIT: Just wanted to add this trend comparison of Javascript frameworks.

    Read the article

  • Can you force a crash if a write occurs to a given memory location with finer than page granularity?

    - by Joseph Garvin
    I'm writing a program that for performance reasons uses shared memory (alternatives have been evaluated, and they are not fast enough for my task, so suggestions to not use it will be downvoted). In the shared memory region I am writing many structs of a fixed size. There is one program responsible for writing the structs into shared memory, and many clients that read from it. However, there is one member of each struct that clients need to write to (a reference count, which they will update atomically). All of the other members should be read only to the clients. Because clients need to change that one member, they can't map the shared memory region as read only. But they shouldn't be tinkering with the other members either, and since these programs are written in C++, memory corruption is possible. Ideally, it should be as difficult as possible for one client to crash another. I'm only worried about buggy clients, not malicious ones, so imperfect solutions are allowed. I can try to stop clients from overwriting by declaring the members in the header they use as const, but that won't prevent memory corruption (buffer overflows, bad casts, etc.) from overwriting. I can insert canaries, but then I have to constantly pay the cost of checking them. Instead of storing the reference count member directly, I could store a pointer to the actual data in a separate mapped write only page, while keeping the structs in read only mapped pages. This will work, the OS will force my application to crash if I try to write to the pointed to data, but indirect storage can be undesirable when trying to write lock free algorithms, because needing to follow another level of indirection can change whether something can be done atomically. Is there any way to mark smaller areas of memory such that writing them will cause your app to blow up? Some platforms have hardware watchpoints, and maybe I could activate one of those with inline assembly, but I'd be limited to only 4 at a time on 32-bit x86 and each one could only cover part of the struct because they're limited to 4 bytes. It'd also make my program painful to debug ;)

    Read the article

  • Need help choosing database server

    - by The Pretender
    Good day everyone. Recently I was given a task to develop an application to automate some aspects of stock trading. While working on the initial architecture, the database dilemma emerged. What I need is a fast database engine which can process huge amounts of data coming in very fast. I'm fairly experienced in general programming, but I have never faced the task of developing a high-load database architecture. I developed a simple MSSQL database schema with several many-to-many relationships during one of my projects, but that's it. What I'm looking for is some advice on choosing the most suitable database engine and some pointers to various manuals or books which describe high-load database development. Specifics of the project are as follows: OS: Windows NT family (Server 2008 / 7) Primary platform: .NET with C# Database structure: one table to hold primary items and two or three tables with foreign keys to the first table to hold additional information. Database SELECT requirements: Need super-fast selection by foreign keys and by combination of foreign key and one of the columns (presumably DATETIME) Database INSERT requirements: The faster the better :) If there'll be a significant performance gain, some parts can be written in C++ with managed interfaces to the rest of the system. So once again: given all that stuff I just typed, please give me some advice on what the best database for my project is. Links or references to some manuals and books on the subject are also greatly appreciated. EDIT: I'll need to insert 3-5 rows in 2 tables approximately once every 30-50 milliseconds and I'll need to do SELECT queries with 0-2 WHERE clauses at a similar rate.

    Read the article

  • DNS-Based Environment Determination

    - by zvolkov
    Found the following here. The question is: where can I find more details on how exactly to implement this on Windows? Any guide or how-to anybody? Or maybe you can provide your invaluable suggestions? Specifically, how do I make it so that "all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com" (I'm a dev, not a DNS specialist, but our IT Support has refused to help on this :() Use DNS Based Environment Determination for your servers. Do this by initially splitting your top level domain into a number of sub domains depending on their function, and then creating DNS Service Names in each of the sub domains pointing to the relevant server for that service. Based on the list above we would then have: * clientdb.prod.example.com for Production * clientdb.perf.example.com for Performance Testing * clientdb.qa.example.com for QA * clientdb.dev.example.com for Development Servers then resolve entries in their relevant sub domain by function. That is, all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com. This allows you to have a single configuration entry for your client database hostname (clientdb) that would resolve correctly in all environments. This technique has the added advantage of still having global services defined in a common top level domain. Here's one related (but not equivalent) SO question: http://stackoverflow.com/questions/774490/dns-resolving-based-on-client-ip This seems to be related to Providing "split horizon" DNS service. Reading that, I see that I will probably need a separate DNS server for each environment. Is this true or does Windows support some form of "tagging" the records to be visible depending on the requestor's IP? Also, cross-posted on ServerFault

    Read the article

  • Is there Annotation in Javascript? If not, how to switch between debug/production modes in a declarative way

    - by Michael Mao
    Hi all: This is but a curious question. I cannot find any useful links from Google so it might be better to ask the gurus here. The point is: is there a way to add an "annotation" to javascript source code so that all code snippets for testing purposes can be 'filtered out' when the project is deployed from the test environment into production? I know in Java, C# or some other languages, you can assign an annotation just above the function name, such as: // it is good to remove the annoying warning messages @SuppressWarnings("unchecked") public class Tester extends TestingPackage { ... } Basically I've got a lot of testing code that prints out something to the FireBug console. I don't want to manually "comment them out" because the guy that is going to maintain the code might not be aware of all the testing functions, so he/she might just miss one function and the whole thing can be brought down to its knees. One other thing, we might use a minimizer to "shrink" the source code into "human unreadable" code and boost performance (just like jQuery.min), so trying to pick the testing sections out of the mess won't be possible for human eyes in the future. Any suggestion is much appreciated.

    Read the article

  • Mysqli query only works on localhost, not webserver

    - by whamo
    Hello. I have changed some of my old queries to the mysqli framework to improve performance. Everything works fine on localhost but when I upload it to the webserver it outputs nothing. After connecting I check for errors and there are none. I also checked the PHP modules installed and mysqli is enabled. I am certain that it creates a connection to the database as no errors are displayed. (When I changed the database name string it gave the error.) There is no output from the query on the webserver, which looks like this: $mysqli = new mysqli("server", "user", "password"); if (mysqli_connect_errno()) { printf("Can't connect Errorcode: %s\n", mysqli_connect_error()); exit; } // Query used $query = "SELECT name FROM users WHERE id = ?"; if ($stmt = $mysqli->prepare("$query")) { // Specify parameters to replace '?' $stmt->bind_param("d", $id); $stmt->execute(); // bind variables to prepared statement $stmt->bind_result($_userName); while ($stmt->fetch()) { echo $_userName; } $stmt->close(); } //close connection $mysqli->close(); As I said this code works perfectly on my local server, just not online. Checked the error logs and there is nothing, so everything points to a good connection. All the tables exist as well etc. Anyone have any ideas? Because this one has me stuck! Also, if I get this working, will all my other queries still work? Or will I need to make them use the mysqli framework as well? Thanks in advance.

    Read the article

  • Is it possible that a single-threaded program is executed simultaneously on more than one CPU core?

    - by Wolfgang Plaschg
    When I run a single-threaded program that I have written on my quad-core Intel I can see in the Windows Task Manager that actually all four cores of my CPU are more or less active. One core is more active than the other three, but there is also activity on those. There's no other program (besides the OS kernel of course) running that would explain that activity. And when I close my program all activity on all cores drops down to nearly zero. All that is left is a little "noise" on the cores, so I'm pretty sure all the visible activity comes directly or indirectly (like invoking system routines) from my program. Is it possible that the OS or the cores themselves try to balance some code or execution on all four cores, even if it's not a multithreaded program? Do you have any links that document this technique? Some info about the program: It's a console app written in Qt; the Task Manager states that only one thread is running. Maybe Qt uses threads, but I don't use signals or slots, nor any GUI. Link to Task Manager screenshot: http://img97.imageshack.us/img97/6403/taskmanager.png This question is language agnostic and not tied to Qt/C++, I just want to know if Windows or Intel CPUs balance single-threaded code across all cores. If they do, how does this technique work? All I can think of is that kernel routines like reading from disk etc. are scheduled on all cores, but this won't improve performance significantly since the code still has to run synchronously with the kernel API calls. EDIT Do you know any tools to do a better analysis of single and/or multi-threaded programs than the poor Windows Task Manager?

    Read the article

  • java time check API

    - by ring bearer
    I am sure this was done 1000 times in 1000 different places. The question is: I want to know if there is a better/standard/faster way to check if the current "time" is between two time values given in hh:mm:ss format. For example, my big business logic should not run between 18:00:00 and 18:30:00. So here is what I had in mind: public static boolean isCurrentTimeBetween(String starthhmmss, String endhhmmss) throws ParseException{ DateFormat hhmmssFormat = new SimpleDateFormat("yyyyMMddHH:mm:ss"); Date now = new Date(); String yyyMMdd = hhmmssFormat.format(now).substring(0, 8); return(hhmmssFormat.parse(yyyMMdd+starthhmmss).before(now) && hhmmssFormat.parse(yyyMMdd+endhhmmss).after(now)); } Example test case: String doNotRunBetween="18:00:00,18:30:00";//read from props file String[] hhmmss = doNotRunBetween.split(","); if(isCurrentTimeBetween(hhmmss[0], hhmmss[1])){ System.out.println("NOT OK TO RUN"); }else{ System.out.println("OK TO RUN"); } What I am looking for is code that is better in performance, looks, and correctness. What I am not looking for: third-party libraries, exception handling debates, variable naming conventions, method modifier issues
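
    One allocation-light alternative, offered only as a sketch under the same no-third-party-library constraint, is to compare seconds-of-day directly instead of formatting and re-parsing dates (it assumes the window does not cross midnight):

        import java.util.Calendar;

        public class TimeWindow {
            // Converts "HH:mm:ss" into seconds since midnight.
            private static int toSeconds(String hhmmss) {
                String[] parts = hhmmss.split(":");
                return Integer.parseInt(parts[0]) * 3600
                     + Integer.parseInt(parts[1]) * 60
                     + Integer.parseInt(parts[2]);
            }

            // True if the current wall-clock time falls inside [start, end).
            public static boolean isCurrentTimeBetween(String starthhmmss, String endhhmmss) {
                Calendar now = Calendar.getInstance();
                int nowSecs = now.get(Calendar.HOUR_OF_DAY) * 3600
                            + now.get(Calendar.MINUTE) * 60
                            + now.get(Calendar.SECOND);
                return nowSecs >= toSeconds(starthhmmss) && nowSecs < toSeconds(endhhmmss);
            }
        }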

    Read the article

  • High-concurrency counters without sharding

    - by dound
    This question concerns two implementations of counters which are intended to scale without sharding (with a tradeoff that they might under-count in some situations): http://appengine-cookbook.appspot.com/recipe/high-concurrency-counters-without-sharding/ (the code in the comments) http://blog.notdot.net/2010/04/High-concurrency-counters-without-sharding My questions: With respect to #1: Running memcache.decr() in a deferred, transactional task seems like overkill. If memcache.decr() is done outside the transaction, I think the worst-case is the transaction fails and we miss counting whatever we decremented. Am I overlooking some other problem that could occur by doing this? What are the significant tradeoffs between the two implementations? Here are the tradeoffs I see: #2 does not require datastore transactions. To get the counter's value, #2 requires a datastore fetch while #1 typically only needs to do a memcache.get() and memcache.add(). When incrementing a counter, both call memcache.incr(). Periodically, #2 adds a task to the task queue while #1 transactionally performs a datastore get and put. #1 also always performs memcache.add() (to test whether it is time to persist the counter to the datastore). Conclusions (without actually running any performance tests): #1 should typically be faster at retrieving a counter (#1 memcache vs #2 datastore). Though #1 has to perform an extra memcache.add() too. However, #2 should be faster when updating counters (#1 datastore get+put vs #2 enqueue a task). On the other hand, with #1 you have to be a bit more careful with the update interval since the task queue quota is almost 100x smaller than either the datastore or memcache APIs.

    Read the article

  • Which non-clustered index should I use?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key. CREATE TABLE [dbo].[Customers]( [CustomerId] [int] IDENTITY(1,1) NOT NULL, [CustomerName] [varchar](100) NOT NULL, [Deleted] [bit] NOT NULL, [Active] [bit] NOT NULL, CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ( [CustomerId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] This is the query I'll be using to see what the execution plan is showing: SELECT CustomerName FROM Customers Well, executing this command with no additional non-clustered index, it leads the execution plan to show me: I/O cost = 3.45646 Operator cost = 4.57715 Now I'm trying to see if it's possible to improve performance, so I've created a non-clustered index for this table: 1) First non-clustered index CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers] ( [CustomerId] ASC, [CustomerName] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO Executing the select against the Customers table again, the execution plan shows me: I/O cost = 2.79942 Operator cost = 3.92001 It seems better. Now I've deleted this just-created non-clustered index, in order to create a new one: 2) Second non-clustered index CREATE NONCLUSTERED INDEX [IX_CustomerIDIncludeCustomerName] ON [dbo].[Customers] ( [CustomerId] ASC ) INCLUDE ( [CustomerName]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO With this new non-clustered index, I've executed the select statement again and the execution plan shows me the same result: I/O cost = 2.79942 Operator cost = 3.92001 So, which non-clustered index should I use? Why are the I/O and Operator costs the same in the execution plan? Am I doing something wrong or is this expected? Thank you

    Read the article

  • iPhone OpenGL and NSTimer issues

    - by Kyle
    I have an NSTimer that runs at 60hz. With an OpenGL scene loaded and rendering, my game can get 60fps, solid, all day long.. Then if I go and recompile the app, or reload it, it will get 40fps. Same resources loaded. I've been running into this problem for years, and I just want to know why. It's crazy, and I want to know if I should just abandon this stupid Timer. Conditions are not different on my 3GS between loads. It will just get 40fps sometimes. Obviously the clockrate is not different between loads, so the performance figures should be constant given a constant scene. Here is a log of my framerates: A good load: :-) FrameRate: 61 FrameRate: 61 FrameRate: 61 FrameRate: 60 FrameRate: 60 FrameRate: 61 FrameRate: 60 FrameRate: 60 FrameRate: 61 FrameRate: 60 FrameRate: 61 Now, I'll go ahead and do nothing, recompile, and run: FrameRate: 43 FrameRate: 50 FrameRate: 45 FrameRate: 48 FrameRate: 40 FrameRate: 45 FrameRate: 42 FrameRate: 41 FrameRate: 42 FrameRate: 44 FrameRate: 41 FrameRate: 46 ^- Massive difference visually. What the flying heck could cause this? SAME area of the scene, SAME camera setup. No variables are different.

    Read the article

  • Best practices for Java logging from multiple threads?

    - by Jason S
    I want to have a diagnostic log that is produced by several tasks managing data. These tasks may be in multiple threads. Each task needs to write an element (possibly with subelements) to the log; get in and get out quickly. If this were a single-task situation I'd use XMLStreamWriter as it seems like the best match for simplicity/functionality without having to hold a ballooning XML document in memory. But it's not a single-task situation, and I'm not sure how to best make sure this is "threadsafe", where "threadsafe" in this application means that each log element should be written to the log correctly and serially (one after the other and not interleaved in any way). Any suggestions? I have a vague intuition that the way to go is to use a queue of log elements (with each one able to be produced quickly: my application is busy doing real work that's performance-sensitive), and have a separate thread which handles the log elements and sends them to a file so the logging doesn't interrupt the producers. The logging doesn't necessarily have to be XML, but I do want it to be structured and machine-readable. edit: I put "threadsafe" in quotes. Log4j seems to be the obvious choice (new to me but old to the community), why reinvent the wheel...
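
    A minimal sketch of the queue-plus-single-writer idea described above (the LogEntry type and the XMLStreamWriter setup are illustrative assumptions): producers enqueue entries cheaply from any thread, and one dedicated consumer thread serializes them, so elements are written one after another and never interleaved:

        import java.io.FileWriter;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import javax.xml.stream.XMLOutputFactory;
        import javax.xml.stream.XMLStreamWriter;

        public class XmlLogWriter implements Runnable {
            private final BlockingQueue<LogEntry> queue = new LinkedBlockingQueue<LogEntry>();
            private final XMLStreamWriter out;

            public XmlLogWriter(String fileName) throws Exception {
                out = XMLOutputFactory.newInstance()
                        .createXMLStreamWriter(new FileWriter(fileName));
                out.writeStartElement("log");
            }

            // Called by worker threads: cheap and non-blocking for the producer.
            public void log(LogEntry entry) {
                queue.offer(entry);
            }

            // Runs on a single dedicated thread, so elements are written serially.
            public void run() {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        LogEntry entry = queue.take();
                        out.writeStartElement(entry.name());
                        out.writeCharacters(entry.text());
                        out.writeEndElement();
                        out.flush();
                    }
                } catch (InterruptedException stop) {
                    // shutdown requested; stop draining the queue
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            // Simple immutable carrier for one log element (illustrative).
            public static final class LogEntry {
                private final String name, text;
                public LogEntry(String name, String text) { this.name = name; this.text = text; }
                public String name() { return name; }
                public String text() { return text; }
            }
        }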

    Read the article

  • how to push different local git branches to heroku/master

    - by lsiden
    Heroku has a policy of ignoring all branches but 'master'. While I'm sure Heroku's designers have excellent reasons for this policy (I'm guessing for storage and performance optimization), the consequence to me as a developer is that whatever local topic branch I may be working on, I would like an easy way to switch Heroku's master to that local topic branch and do a "git push heroku -f" to overwrite master on Heroku. What I got from reading the "Pushing Refspecs" section of http://progit.org/book/ch9-5.html is git push -f heroku local-topic-branch:refs/heads/master What I'd really like is a way to set this up in the config file so that "git push heroku" always does the above, replacing local-topic-branch with the name of whatever my current branch happens to be. If anyone knows how to accomplish that, please let me know! The caveat for this, of course, is that this is only sensible if I am the only one who can push to that Heroku app/repository. A test or QA team might manage such a repository to try out different candidate branches, but they would have to coordinate so that they all agree on what branch they are pushing to it on any given day. Needless to say, it would also be a very good idea to have a separate remote repository (like Github) without this restriction for backing everything up to. I'd call that one "origin" and use "heroku" for Heroku so that "git push" always backs up everything to origin, and "git push heroku" pushes whatever branch I'm currently on to Heroku's master branch, overwriting it if necessary. Can anybody tell me if this would work? [remote "heroku"] url = git@heroku.com:my-app.git push = +refs/heads/*:refs/heads/master I'd like to hear from someone more experienced before I begin to experiment, although I suppose I could create a dummy app on Heroku and experiment with that. As for fetching, I don't really care if the Heroku repository is write-only. I still have a separate repository, like Github, for backup and cloning of all my work. Footnote: This question is similar to, but not quite the same as http://stackoverflow.com/questions/1489393/good-git-deployment-using-branches-strategy-with-heroku

    Read the article

  • Users in database server or database tables

    - by Batcat
    Hi all, I came across an interesting issue in client-server application design. We have a browser-based management application with many users using the system, so obviously within that application we have a user management module. I have always thought having a user table in the database to keep all the login details was good enough. However, a senior developer said user management should be done in the database server layer, otherwise it is poorly designed. What he meant was, if a user wants to use the application then a user should be created in the user table AND in the database server as a user account as well. So if I have 50 users using my application, then I should have 50 database server user logins. I personally think having just one user account in the database server for this database is enough. Just grant this user the privileges needed for all the operations the application performs. The users that are interacting with the application should have their user accounts created and managed within the database table as they are more related to the application layer. I don't see or agree that there is a need to create a database server user account for every user created for the application in the user table. A single database server user should be enough to handle all the queries sent by the application. I really hope to hear some suggestions / opinions on whether I'm missing something: performance or security issues? Thank you very much.

    Read the article

  • Executing remote script - Architecture

    - by Bruno Lee
    Hi, I want to make an application that executes a remote script. The user can create a script (probably a Lua script) and then store it on the server. Then he can use an API to execute the script. I was thinking that the API could be a webservice. So my questions are: I need high performance to execute the script. So my first choice was a Lua script. Does someone have another suggestion? Because I need high performance, I was wondering whether a webservice is the best solution. Maybe I could create a TCP/IP Windows Service that holds the users' requests. It is important to say that I will have many users executing scripts at the same time, so I will have a concurrency problem. My scripts will query a database. I will use Tokyo Cabinet or Tokyo Tyrant. I think Tokyo Tyrant is the only solution because I will have many requests. For performance, do I need connection pooling? Is there any way to share variables between webservice requests? To build the webservice or the Windows service I was thinking of using C++. Can someone help with these questions? Thanks

    Read the article

  • Sorting GridView Formed With Data Set

    - by nani
    The following code is for sorting a GridView formed with a DataSet. Source: http://www.highoncoding.com/Articles/176_Sorting_GridView_Manually_.aspx But it is not displaying any output. There is no problem with the SQL connection. I am unable to trace the error, please help me. Thank You. public partial class _Default : System.Web.UI.Page { private const string ASCENDING = " ASC"; private const string DESCENDING = " DESC"; private DataSet GetData() { SqlConnection cnn = new SqlConnection("Server=localhost;Database=Northwind;Trusted_Connection=True;"); SqlDataAdapter da = new SqlDataAdapter("SELECT TOP 5 firstname,lastname,hiredate FROM EMPLOYEES", cnn); DataSet ds = new DataSet(); da.Fill(ds); return ds; } public SortDirection GridViewSortDirection { get { if (ViewState["sortDirection"] == null) ViewState["sortDirection"] = SortDirection.Ascending; return (SortDirection)ViewState["sortDirection"]; } set { ViewState["sortDirection"] = value; } } protected void GridView1_Sorting(object sender, GridViewSortEventArgs e) { string sortExpression = e.SortExpression; if (GridViewSortDirection == SortDirection.Ascending) { GridViewSortDirection = SortDirection.Descending; SortGridView(sortExpression, DESCENDING); } else { GridViewSortDirection = SortDirection.Ascending; SortGridView(sortExpression, ASCENDING); } } private void SortGridView(string sortExpression, string direction) { // You can cache the DataTable for improving performance DataTable dt = GetData().Tables[0]; DataView dv = new DataView(dt); dv.Sort = sortExpression + direction; GridView1.DataSource = dv; GridView1.DataBind(); } } aspx page: <asp:GridView ID="GridView1" runat="server" AllowSorting="True" OnSorting="GridView1_Sorting"> </asp:GridView>

    Read the article

  • What is best practice (and implications) for packaging projects into JAR's?

    - by user245510
    What is considered best practice when deciding how to define the set of JAR's for a project (for example a Swing GUI)? There are many possible groupings: JAR per layer (presentation, business, data) JAR per (significant?) GUI panel. For a significant system, this results in a large number of JAR's, but the JAR's are (should be) more re-usable - fine-grained granularity JAR per "project" (in the sense of an IDE project); "common.jar", "resources.jar", "gui.jar", etc I am an experienced developer; I know the mechanics of creating JAR's, I'm just looking for wisdom on best practice. Personally, I like the idea of a JAR per component (e.g. a panel), as I am mad-keen on encapsulation, and the holy grail of re-use across projects. I am concerned, however, that on a practical, performance level, the JVM would struggle with class loading across dozens, maybe hundreds of small JAR's. Each JAR would contain: the GUI panel code, necessary resources (i.e. not centralised) so each panel can stand alone. Does anyone have wisdom to share?

    Read the article

  • Struggling with a data modeling problem

    - by rpat
    I am struggling with a data model (I use MySQL for the database). I am uneasy about what I have come up with. If someone could suggest a better approach, or point me to some reference matter, I would appreciate it. The data would have organizations of many types. I am trying to do a 3-level classification (Class, Category, Type). Say if I have 'Italian Restaurant', it will have the following classification Food Services Restaurants Italian However, an organization may belong to multiple groups. A restaurant may also serve Chinese and Italian. So it will fit into 2 classifications Food Services Restaurants Italian Food Services Restaurants Chinese The classification reference tables would be like the following: ORG_CLASS (RowId, ClassCode, ClassName) 1, FOOD, Food Services ORG_CATEGORY(RowId, ClassCode, CategoryCode, CategoryName) 1, FOOD, REST, Restaurants ORG_TYPE (RowId, ClassCode, CategoryCode, TypeCode, TypeName) 100, FOOD, REST, ITAL, Italian 101, FOOD, REST, CHIN, Chinese 102, FOOD, REST, SPAN, Spanish 103, FOOD, REST, MEXI, Mexican 104, FOOD, REST, FREN, French 105, FOOD, REST, MIDL, Middle Eastern The actual data tables would be like the following: I will allow an organization a max of 3 classifications. I will have 3 GroupIds each pointing to a row in ORG_TYPE. So I have my ORGANIZATION_TABLE ORGANIZATION_TABLE (OrgGroupId1, OrgGroupId2, OrgGroupId3, OrgName, OrgAddres) 100,103,NULL,MyRestaurant1, MyAddr1 100,102,NULL,MyRestaurant2, MyAddr2 100,104,105, MyRestaurant3, MyAddr3 During data add, a dialog could let the user choose the class, category and type, and the corresponding GroupId could be populated with the rowid from the ORG_TYPE table. During search, if all three classifications are chosen, it will be more specific. For example, if Food Services Restaurants Italian is the criteria, the where clause would be 'where OrgGroupId1 = 100' If only 2 levels are chosen Food Services Restaurants I have to do 'where OrgGroupId1 in (100,101,102,103,104,105, .....)' - There could be a hundred in that list I will disallow class-level search; that is, I will force selection of a class and category. The Ids would be integers. I am trying to anticipate performance and other issues. Overall, would this work, or do I need to throw this out and start from scratch?

    Read the article
