Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.

Page 481/537

  • Unable to get data from a WCF client

    - by Scott
    I am developing a DLL that will provide synchronized time stamps to multiple applications running on the same machine. The timestamps are altered in a thread that uses a high performance timer and a scalar to provide the appearance of moving faster than real-time. For obvious reasons I want only 1 instance of this time library, and I thought I could use WCF for the other processes to connect to this and poll for timestamps whenever they want. When I connect, however, I never get a valid time stamp, just an empty DateTime. I should point out that the library does work. The original implementation was a single DLL that each application incorporated and each one was synced using Windows messages. I'm fairly sure it has something to do with how I'm setting up the WCF stuff, to which I am still pretty new. Here are the contract definitions: public interface ITimerCallbacks { [OperationContract(IsOneWay = true)] void TimerElapsed(String id); } [ServiceContract(SessionMode = SessionMode.Required, CallbackContract = typeof(ITimerCallbacks))] public interface ISimTime { [OperationContract] DateTime GetTime(); } Here is my class definition: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)] public class SimTimeServer: ISimTime The host setup: // set up WCF interprocess comms host = new ServiceHost(typeof(SimTimeServer), new Uri[] { new Uri("net.pipe://localhost") }); host.AddServiceEndpoint(typeof(ISimTime), new NetNamedPipeBinding(), "SimTime"); host.Open(); and the implementation of the interface function server-side: public DateTime GetTime() { if (ThreadMutex.WaitOne(20)) { RetTime = CurrentTime; ThreadMutex.ReleaseMutex(); } return RetTime; } Lastly the client-side implementation: Callbacks myCallbacks = new Callbacks(); DuplexChannelFactory pipeFactory = new DuplexChannelFactory(myCallbacks, new NetNamedPipeBinding(), new EndpointAddress("net.pipe://localhost/SimTime")); ISimTime pipeProxy = pipeFactory.CreateChannel(); while (true) { string str = Console.ReadLine(); if (str.ToLower().Contains("get")) Console.WriteLine(pipeProxy.GetTime().ToString()); else if (str.ToLower().Contains("exit")) break; }
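
    One thing worth noting in the snippet above: if ThreadMutex.WaitOne(20) times out on the first call, RetTime is still at its default value, which would come back as an empty DateTime. For reference, a minimal self-contained sketch of the same singleton net.pipe service without the duplex callback (endpoint and type names reused from the question; a plain lock stands in for the timed mutex wait) might look like this:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface ISimTime
        {
            [OperationContract]
            DateTime GetTime();
        }

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class SimTimeServer : ISimTime
        {
            private readonly object _sync = new object();
            private DateTime _currentTime = DateTime.Now;

            public DateTime GetTime()
            {
                lock (_sync)   // simple lock instead of a timed mutex wait
                {
                    return _currentTime;
                }
            }
        }

        class HostProgram
        {
            static void Main()
            {
                using (var host = new ServiceHost(typeof(SimTimeServer),
                                                  new Uri("net.pipe://localhost")))
                {
                    host.AddServiceEndpoint(typeof(ISimTime),
                                            new NetNamedPipeBinding(), "SimTime");
                    host.Open();
                    Console.ReadLine();   // keep the host alive for clients
                }
            }
        }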

    Read the article

  • HTML5 audio object doesn't play on iPad (when called from a setTimeout)

    - by Dan Halliday
    I have a page with a hidden <audio> object which is being started and stopped using a custom button via javascript. (The reason being I want to customise the button, and that drawing an audio player seems to destroy rendering performance on iPad anyway). A simplified example (in coffeescript): // Works fine on all browsers constructor: (@_button, @_audio) -> @_button.on 'click', @_play // Bind button's click event with jQuery _play: (e) => @_audio[0].play() // Call play() on audio element The audio plays fine when triggered from a function bound to a click event, but I actually want an animation to complete before the file plays so I put .play() inside a setTimeout. However I just can't get this to work: // Will not play on iPad constructor: (@_button, @_audio) -> @_button.on 'click', @_play // Bind button's click event with jQuery _play: (e) => setTimeout (=> // Declare a 300ms timeout @_audio[0].play() // Call play() on audio element ), 300 I've checked that @_audio (this._audio) is in scope and that its play() method exists. Why doesn't this work on iPad?

    Read the article

  • How good is the memory mapped Circular Buffer on Wikipedia?

    - by abroun
    I'm trying to implement a circular buffer in C, and have come across this example on Wikipedia. It looks as if it would provide a really nice interface for anyone reading from the buffer, as reads which wrap around from the end to the beginning of the buffer are handled automatically. So all reads are contiguous. However, I'm a bit unsure about using it straight away as I don't really have much experience with memory mapping or virtual memory and I'm not sure that I fully understand what it's doing. What I think I understand is that it's mapping a shared memory file the size of the buffer into memory twice. Then, whenever data is written into the buffer it appears in memory in 2 places at once. This allows all reads to be contiguous. What would be really great is if someone with more experience of POSIX memory mapping could have a quick look at the code and tell me if the underlying mechanism used is really that efficient. Am I right in thinking for example that the file in /dev/shm used for the shared memory always stays in RAM or could it get written to the hard drive (performance hit) at some point? Are there any gotchas I should be aware of? As it stands, I'm probably going to use a simpler method for my current project, but it'd be good to understand this to have it in my toolbox for the future. Thanks in advance for your time.

    Read the article

  • Testing approach for multi-threaded software

    - by Shane MacLaughlin
    I have a piece of mature geospatial software that has recently had areas rewritten to take better advantage of the multiple processors available in modern PCs. Specifically, display, GUI, spatial searching, and main processing have all been hived off to separate threads. The software has a pretty sizeable GUI automation suite for functional regression, and another smaller one for performance regression. While all automated tests are passing, I'm not convinced that they provide nearly enough coverage in terms of finding bugs relating to race conditions, deadlocks, and other nasties associated with multi-threading. What techniques would you use to see if such bugs exist? What techniques would you advocate for rooting them out, assuming there are some in there to root out? What I'm doing so far is running the GUI functional automation on the app running under a debugger, such that I can break out of deadlocks and catch crashes, and I plan to make a BoundsChecker build and repeat the tests against that version. I've also carried out a static analysis of the source via PC-Lint with the hope of locating potential deadlocks, but not had any worthwhile results. The application is C++, MFC, multiple document/view, with a number of threads per doc. The locking mechanism I'm using is based on an object that includes a pointer to a CMutex, which is locked in the ctor and freed in the dtor. I use local variables of this object to lock various bits of code as required, and my mutex has a timeout that fires a warning if the timeout is reached. I avoid locking where possible, using resource copies where possible instead. What other tests would you carry out?

    Read the article

  • Undefined Web.config error in VS 2008

    - by user1066050
    I'm working on a web app using VS 2008, .Net 3.5 and C#. Most of the projects in the solution are either classic asp.net pages with some MVC 1 in the mix, the rest is shared libraries. The solution is one that is some 5 years old and has gone through a variety of developers working on it and clearly has some performance and architectural issues. Previously, I've been working on the project using VS 2008 on a Win XP machine, but have just transitioned over to a new box using Win 7 Ultimate. To do so, I've installed VS 2008, asp.net 3.5. To support future work on the solution I've also installed VS 2010 and asp.net 4.0. Opening the solution on the new box with VS 2008 works fine, and it builds without error. However, when I attempt to run it with the debugger, I get the following message: "There is an error in web.config. Please correct before proceeding. (You might rename the current web.config and add a new one.)" I think it's clear that there is some sort of environmental issue regarding web.config on the new machine, but the error message is not "helpful". Adding a new web.config is not an option as the existing one is quite long and involved (too much to post here). I'm hoping someone has a suggestion or two about where I might look for missing elements or changed configurations that might produce such an error message. Lacking that, I'll revisit this post and provide the web.config in the hope that will elicit further help. Thanks to all in advance for taking a look at this. The StackOverflow community has helped me many times in the past with pertinent answers although this is my first posting. Jeff

    Read the article

  • Why are my connections not closed even if I explicitly dispose of the DataContext?

    - by Chris Simpson
    I encapsulate my linq to sql calls in a repository class which is instantiated in the constructor of my overloaded controller. The constructor of my repository class creates the data context so that for the life of the page load, only one data context is used. In my destructor of the repository class I explicitly call the dispose of the DataContext though I do not believe this is necessary. Using performance monitor, if I watch my User Connections count and repeatedly load a page, the number increases once per page load. Connections do not get closed or reused (for about 20 minutes). I tried putting Pooling=false in my config to see if this had any effect but it did not. In any case with pooling I wouldn't expect a new connection for every load, I would expect it to reuse connections. I've tried putting a break point in the destructor to make sure the dispose is being hit and sure enough it is. So what's happening? Some code to illustrate what I said above: The controller: public class MyController : Controller { protected MyRepository rep; public MyController () { rep = new MyRepository(); } } The repository: public class MyRepository { protected MyDataContext dc; public MyRepository() { dc = getDC(); } ~MyRepository() { if (dc != null) { //if (dc.Connection.State != System.Data.ConnectionState.Closed) //{ // dc.Connection.Close(); //} dc.Dispose(); } } // etc } Note: I add a number of hints and context information to the DC for auditing purposes. This is essentially why I want one connection per page load
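
    A common alternative, sketched below with the question's own member names (MyRepository, MyDataContext, getDC()), is to dispose the context deterministically at the end of the request via IDisposable rather than relying on a finalizer, which only runs whenever the garbage collector gets around to it:

        using System;
        using System.Web.Mvc;

        // Sketch only: MyDataContext and getDC() are the question's own members.
        public class MyRepository : IDisposable
        {
            protected MyDataContext dc;

            public MyRepository()
            {
                dc = getDC();
            }

            public void Dispose()
            {
                if (dc != null)
                {
                    dc.Dispose();   // deterministic: runs at the end of the request
                }
            }
        }

        public class MyController : Controller
        {
            protected MyRepository rep = new MyRepository();

            protected override void Dispose(bool disposing)
            {
                if (disposing)
                {
                    rep.Dispose();   // MVC disposes each controller after its request
                }
                base.Dispose(disposing);
            }
        }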

    Read the article

  • Heavy Mysql operation & Time Constraints [closed]

    - by Rahul Jha
    There is a performance issue that I am stuck with in my application, which is based on PHP & MySQL. The application is for data migration, where data has to be uploaded and, after various processes (cleaning of foreign characters, duplicate check, id generation), it has to be inserted into one central table and then into 5 different tables. There, an id is generated and that id has to be updated back to the central table. There are different sets of records and validation rules. The problem I am facing is that when I insert, say, a 4K-row file (containing 20 columns), it works fine and within 15 min it gets inserted everywhere. But when I insert the same records again, it takes one hour to insert (ideally it should get inserted by marking the earlier inserted data as duplicate). After going through the log file, what I noticed is that there is a MySQL select statement where I am checking for duplicates and getting the IDs which are duplicates. Then I am calling a function inside a for loop which is basically inserting records into the 5 tables and updates the id to the central table. This called function takes up the major part of the whole process's time. P.S. The records have to be inserted record by record. Kindly suggest some solution. // This is the sample code $query=mysql_query("SELECT DISTINCT p1.ID FROM table1 p1, table2 p2, table3 a WHERE p2.datatype =0 AND (p1.datatype =1 || p1.datatype=2) AND p2.ID =0 AND p1.ID = a.ID AND p1.coulmn1 = p2.column1 AND p1.coulmn2 = p2.coulmn2 AND a.coulmn3 = p2.column3"); $num=mysql_num_rows($query); for($i=0;$i<$num;$i++) { $f=mysql_result($query,$i,"ID"); //calling function RecordInsert($f); }

    Read the article

  • OptimisticLockException in inner transaction ruins outer transaction

    - by Pace
    I have the following code (OLE = OptimisticLockException)... public void outer() { try { middle(); } catch (OLE) { updateEntities(); outer(); } } @Transactional public void middle() { try { inner(); } catch (OLE) { updateEntities(); middle(); } } @Transactional public void inner() { //Do DB operation } inner() is called by other non-transactional methods, which is why both middle() and inner() are transactional. As you can see, I deal with OLEs by updating the entities and retrying the operation. The problem I'm having is that when I designed things this way I was assuming that the only time one could get an OLE was when a transaction closed. This is apparently not the case, as the call to inner() is throwing an OLE even when the stack is outer()->middle()->inner(). Now, middle() is properly handling the OLE and the retry succeeds, but when it comes time to close the transaction it has been marked rollbackOnly by Spring. When the middle() method call finally returns, the closing aspect throws an exception because it can't commit a transaction marked rollbackOnly. I'm uncertain what to do here. I can't clear the rollbackOnly state. I don't want to force-create a transaction on every call to inner because that kills my performance. Am I missing something or can anyone see a way I can structure this differently? EDIT: To clarify what I'm asking, let me explain my main question. Is it possible to catch and handle OLE if you are inside of an @Transactional method? FYI: The transaction manager is a JpaTransactionManager and the JPA provider is Hibernate.

    Read the article

  • Detaching all entities of T to get fresh data

    - by Goran
    Let's take an example where there are two types of entities loaded: Product and Category, Product.CategoryId - Category.Id. We have available CRUD operations on products (not Categories). If on another screen Categories are updated (or from another user in the network), we would like to be able to reload the Categories, while preserving the context we currently use, since we could be in the middle of editing data, and we do not want changes to be lost (and we cannot depend on saving, since we have incomplete data). Since there is no easy way to tell EF to get fresh data (added, removed and modified), we thought of two possible ways: 1) Getting products attached to the context, and categories detached from the context. This would mean that we lose the ability to access Product.Category.Name, which we do sometimes require, so we would need to manually resolve it (for example when printing data). 2) Detaching/attaching all Categories from the current context. Context.ChangeTracker.Entries().Where(x => x.Entity.GetType() == typeof(T)).ForEach(x => x.State = EntityState.Detached); And then reload the categories, which will get fresh data. Do you find any problem with this second approach? We understand that this will require all constraints to be put on foreign keys, and not navigation properties, since when detaching all Categories, Product.Category navigation properties would be reset to null also. Also, there could be a potential performance problem, which we did not test, since there could be a couple of thousand products loaded, and all would need to resolve the navigation property when reloading. Which of the two do you prefer, and is there a better way (EF6 + .NET 4.0)?
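
    A sketch of the second approach, assuming an EF6 context type named MyDbContext with a Categories set (names assumed, not taken from the question), detaches every tracked Category and reloads them in a single query; relationship fix-up should then repopulate Product.Category on tracked products whose CategoryId matches a reloaded row:

        using System.Data.Entity;
        using System.Linq;

        public static class CategoryRefresher
        {
            // Sketch only: MyDbContext, Category and the Categories set are assumed
            // to match the question's EF6 model.
            public static void ReloadCategories(MyDbContext context)
            {
                // Detach every tracked Category so the next query hits the database.
                var tracked = context.ChangeTracker.Entries()
                                      .Where(e => e.Entity is Category)
                                      .ToList();
                foreach (var entry in tracked)
                {
                    entry.State = EntityState.Detached;
                }

                // Reload; relationship fix-up should repopulate Product.Category on
                // tracked products whose CategoryId matches a reloaded row.
                context.Categories.Load();
            }
        }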

    Read the article

  • Pump Messages During Long Operations + C#

    - by Newbie
    Hi, I have a web service that is doing a huge computation and is taking more than a minute. I have generated the proxy file of the web service and then from my client end I am using the dll (of course I generated the proxy dll). My client-side code is: TimeSeries3D t = new TimeSeries3D(); int portfolioId = 4387919; string[] str = new string[2]; str[0] = "MKT_CAP"; DateRange dr = new DateRange(); dr.mStartDate = DateTime.Today; dr.mEndDate = DateTime.Today; Service1 sc = new Service1(); t = sc.GetAttributesForPortfolio(portfolioId, true, str, dr); But since it is taking too much time for the server to compute, after 1 minute I am receiving an error message: The CLR has been unable to transition from COM context 0x33caf30 to COM context 0x33cb0a0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations. Kindly guide me on what to do? Thanks
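
    One common workaround, sketched below using the question's own proxy types and arguments, is to keep the minute-long call off the STA thread entirely, for example on a ThreadPool thread, so the calling thread keeps pumping messages instead of blocking:

        using System;
        using System.Threading;

        class PortfolioClient
        {
            static void Main()
            {
                // Run the slow proxy call on a worker thread so the calling (STA)
                // thread keeps pumping messages instead of blocking for a minute.
                ThreadPool.QueueUserWorkItem(state =>
                {
                    var sc = new Service1();            // generated proxy from the question
                    var dr = new DateRange();
                    dr.mStartDate = DateTime.Today;
                    dr.mEndDate = DateTime.Today;

                    TimeSeries3D t = sc.GetAttributesForPortfolio(
                        4387919, true, new[] { "MKT_CAP" }, dr);

                    // Marshal the result back to the UI thread here if one exists.
                    Console.WriteLine("Call completed.");
                });

                Console.ReadLine();   // stand-in for the application's message loop
            }
        }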

    Read the article

  • Index an array expression directly in PostgreSQL

    - by wich
    I'm trying to insert data into a table from a template table. I need to rewrite one of the columns for which I wanted to use a directly indexed array expression, but I can't seem to find how to do this, if it is even possible. The scenario: create table template ( id integer, index integer, foo integer); insert into template values (0, 1, 23), (0, 2, 18), (0, 3, 16), (0, 4, 7), (1, 1, 17), (1, 2, 26), (1, 3, 11), (1, 4, 3); create table data ( data_id integer, foo integer); Now what I'd like to do is the following: insert into data select (array[3,7,5,2])[index], foo from template where id = 1; But this doesn't work, the (array[3,7,5,2])[index] syntax isn't valid. I tried a few variants, but was unable to get anything working and wasn't able to find the correct syntax in the docs, nor even whether this is at all possible or not. As a current workaround I've devised the following, but it is less than ideal, from an elegance perspective at least, but it may also be a performance hit, I haven't looked into that yet. insert into data select arr[index], foo from template, (select array[3,7,5,2] as arr) as q where id = 1; If anyone could suggest a (better) alternative to accomplish this I'd like to hear that as well.

    Read the article

  • How do repos (SVN, GIT) work?

    - by masfenix
    I read SO nearly every day and mostly there is a thread about source control. I have a few questions. I am going to use SVN as an example. 1) There is a team (small, large doesn't matter). In the morning everyone checks out the code to start working. At noon Person A commits, while person B still works on it. What happens when person B commits? How will person B know that there is an updated file? 2) I am assuming the answer to the first question is "run an update command which tells you", OK, so person B finds out that the file they have been working on all morning has changed. When they see the updated file, it seems like person A has REWRITTEN the file for better performance. What does person B do? Seems like their whole day was a waste of time. Or if they commit their version then it's a waste of person A's time? 3) What are branches? Thanks, and if anyone knows a layman's terms PDF or something that explains it that would be awesome.

    Read the article

  • opengl shader make color "disappear"

    - by JFoulkes
    Hi, I'm new to OpenGL and shaders. I'm trying to do some augmented reality on the iPhone and messing about with shaders to alter a feed from the camera. What I'm trying to achieve is the appearance that an object in a picture has disappeared by setting the color to match the surrounding colour. I have a yellow rectangle and in it is a small red circle. I want to give the impression that the red circle has disappeared by setting the colour to be yellow. It won't always be solid colours but I'm just trying to get the basics down first. Currently I have a simple shader which will make a red colour lighter but this isn't ideal because it doesn't get close to the surrounding colour and I want this to work for different coloured objects and different coloured surroundings. I'm not even 100% sure shaders are what I need to be looking at, or even OpenGL. I'm using it because of the performance it gives on the iPhone. I'm basically asking: has anyone done or seen anything similar? Am I barking up the wrong tree using OpenGL ES and GLSL? Is this even possible? Cheers.

    Read the article

  • Bash PATH: How long is too long?

    - by ajwood
    Hi, I'm currently designing a software quarantine pattern to use on Ubuntu. I'm not sure how standard "quarantine" is in this context, so here is what I hope to accomplish... Inside a particular quarantine is all of the stuff one needs to run an application (bin, share, lib, etc.). Ideally, the quarantine has no leaks, which means it's not relying on any code outside of itself on the system. A quarantine can be defined as a set of executables (and some environment settings needed to make them run). I think it will be beneficial to separate the built packages enough such that upgrading to a newer version of the quarantine won't require rebuilding the whole thing. I'll be able to update just a few packages, and then the new quarantine can use some of the old parts and some of the new parts. One issue I'm wondering about is the environment variables I'll be setting up to use a particular quarantine. Is there a hard limit on how big PATH can be? (either in number of characters, or in the number of directories it contains) Might a path be so long that it affects performance? Thanks very much, Andrew p.s. Any other wisdom that might help my design would be greatly appreciated :)

    Read the article

  • Invoking a method overloaded where all arguments implement the same interface

    - by double07
    Hello, My starting point is the following: - I have a method, transform, which I overloaded to behave differently depending on the type of arguments that are passed in (see transform(A a1, A a2) and transform(A a1, B b) in my example below) - All these arguments implement the same interface, X I would like to apply that transform method on various objects all implementing the X interface. What I came up with was to implement transform(X x1, X x2), which checks for the instance of each object before applying the relevant variant of my transform. Though it works, the code seems ugly and I am also concerned about the performance overhead of evaluating these various instanceof checks and casts. Is that transform the best I can do in Java or is there a more elegant and/or efficient way of achieving the same behavior? Below is a trivial, working example printing out BA. I am looking for examples on how to improve that code. In my real code, I have naturally more implementations of 'transform' and none are trivial like below. public class A implements X { } public class B implements X { } interface X { } public A transform(A a1, A a2) { System.out.print("A"); return a2; } public A transform(A a1, B b) { System.out.print("B"); return a1; } // Isn't there something better than the code below??? public X transform(X x1, X x2) { if ((x1 instanceof A) && (x2 instanceof A)) { return transform((A) x1, (A) x2); } else if ((x1 instanceof A) && (x2 instanceof B)) { return transform((A) x1, (B) x2); } else { throw new RuntimeException("Transform not implemented for " + x1.getClass() + "," + x2.getClass()); } } @Test public void trivial() { X x1 = new A(); X x2 = new B(); X result = transform(x1, x2); transform(x1, result); }

    Read the article

  • Flex profiling - what is [enterFrameEvent] doing?

    - by Herms
    I've been tasked with finding (and potentially fixing) some serious performance problems with a Flex application that was delivered to us. The application will consistently take up 50 to 100% of the CPU at times when it is simply idling and shouldn't be doing anything. My first step was to run the profiler that comes with FlexBuilder. I expected to find some method that was taking up most of the time, showing me where the bottleneck was. However, I got something unexpected. The top 4 methods were: [enterFrameEvent] - 84% cumulative, 32% self time; [reap] - 20% cumulative and self time; [tincan] - 8% cumulative and self time; global.isNaN - 4% cumulative and self time. All other methods had less than 1% for both cumulative and self time. From what I've found online, the [bracketed methods] are what the profiler lists when it doesn't have an actual Flex method to show. I saw someone claim that [tincan] is the processing of RTMP requests, and I assume [reap] is the garbage collector. Does anyone know what [enterFrameEvent] is actually doing? I assume it's essentially the "main" function for the event loop, so the high cumulative time is expected. But why is the self time so high? What's actually going on? I didn't expect the player internals to be taking up so much time, especially since nothing is actually happening in the app (and there are no UI updates going on). Is there any good way to dig into what's happening? I know something is going on that shouldn't be (it looks like there must be some kind of busy wait or other runaway loop), but the profiler isn't giving me any results that I was expecting. My next step is going to be to start adding debug trace statements in various places to try and track down what's actually happening, but I feel like there has to be a better way.

    Read the article

  • How can I share an entity framework model across website users

    - by richardmoss
    Hello, Currently my website is based around MVC and the Entity Framework running against a SQL Server 2005 database. So far, it has all been running very smoothly, and I really enjoy MVC and its slimmer, more concise code (and no huge viewstates or soul-destroying postbacks ;)) Recently I was working on upgrading the site to use a simple forum system, and this is where I started running into problems. When I was testing the site using two different browsers, if I created or replied to a post in one browser, the other browser couldn't see the post. At the moment, each visitor to the site gets their own copy of the entity model, which I store in their session data. Obviously this is the problem, as updates to one model aren't getting carried to the other. As a test, I tried storing a single copy of the model which all visitors would access by assigning the model to a static variable. This worked, and both browsers could see each other's modifications. However, it had its side effects. For example, if I fired up both browsers at the same time and the model was initialized, one browser would crash, and the other would work fine, despite me using a locking object so in theory one of them should have been delayed until the model was ready (of course I could have implemented this wrong ;)). Also, originally this site did use one model for all visitors and when it was live, it frequently shut down - killing the IIS application pool while it did. Now I'm not sure if this was related, but I don't really want to reintroduce whatever bug I had that caused this shutdown. So, my question is a simple one really - what is the best way of either using the same model for all website users so they all see updates, or if they do have separate copies (which I imagine will have a performance impact in time) how can the models detect changes in the database and update themselves accordingly. Thanks in advance for any advice! Regards, Richard Moss
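
    A common pattern here, sketched below with an assumed context type name (MyEntities), is neither one context per session nor one global static, but one context per HTTP request: it is cheap to create, always reads current data, and is never shared between two users' threads.

        using System;
        using System.Web;

        public static class ContextPerRequest
        {
            private const string Key = "__entityModel";

            // One context per HTTP request, stored in HttpContext.Items.
            public static MyEntities Current
            {
                get
                {
                    var items = HttpContext.Current.Items;
                    if (items[Key] == null)
                    {
                        items[Key] = new MyEntities();   // assumed context type name
                    }
                    return (MyEntities)items[Key];
                }
            }

            // Call this from Application_EndRequest in Global.asax.
            public static void DisposeCurrent()
            {
                var context = HttpContext.Current.Items[Key] as IDisposable;
                if (context != null)
                {
                    context.Dispose();
                }
            }
        }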

    Read the article

  • Why doesn't java.lang.Number implement Comparable?

    - by Julien Chastang
    Does anyone know why java.lang.Number does not implement Comparable? This means that you cannot sort Numbers with Collections.sort, which seems to me a little strange. Post discussion update: Thanks for all the helpful responses. I ended up doing some more research about this topic. The simplest explanation for why java.lang.Number does not implement Comparable is rooted in mutability concerns. For a bit of review, java.lang.Number is the abstract super-type of AtomicInteger, AtomicLong, BigDecimal, BigInteger, Byte, Double, Float, Integer, Long and Short. On that list, AtomicInteger and AtomicLong do not implement Comparable. Digging around, I discovered that it is not a good practice to implement Comparable on mutable types because the objects can change during or after comparison, rendering the result of the comparison useless. Both AtomicLong and AtomicInteger are mutable. The API designers had the forethought to not have Number implement Comparable because it would have constrained implementation of future subtypes. Indeed, AtomicLong and AtomicInteger were added in Java 1.5 long after java.lang.Number was initially implemented. Apart from mutability, there are probably other considerations here too. A compareTo implementation in Number would have to promote all numeric values to BigDecimal because it is capable of accommodating all the Number sub-types. The implication of that promotion in terms of mathematics and performance is a bit unclear to me, but my intuition finds that solution kludgy.

    Read the article

  • LINQ to SQL: Reusable expression for property?

    - by coenvdwel
    Pardon me for being unable to phrase the title more exactly. Basically, I have three LINQ objects linked to tables. One is Product, the other is Company and the last is a mapping table Mapping to store what Company sells which products and by which ID this Company refers to this Product. I am now retrieving a list of products as follows: var options = new DataLoadOptions(); options.LoadWith<Product>(p => p.Mappings); context.LoadOptions = options; var products = ( from p in context.Products select new { ProductID = p.ProductID, //BackendProductID = p.BackendProductID, BackendProductID = (p.Mappings.Count == 0) ? "None" : (p.Mappings.Count > 1) ? "Multiple" : p.Mappings.First().BackendProductID, Description = p.Description } ).ToList(); This does a single query retrieving the information I want. But I want to be able to move the logic behind the BackendProductID into the LINQ object so I can use the commented line instead of the annoyingly nested ternary operator statements for neatness and re-usability. So I added the following property to the Product object: public string BackendProductID { get { if (Mappings.Count == 0) return "None"; if (Mappings.Count > 1) return "Multiple"; return Mappings.First().BackendProductID; } } The list is still the same, but it now does a query for every single Product to get its BackendProductID. The code is neater and re-usable, but the performance is now terrible. What I need is some kind of Expression or Delegate but I couldn't get my head around writing one. It always ended up querying for every single product, still. Any help would be appreciated!
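
    One way to keep the logic reusable without triggering a query per row, sketched here with a hypothetical ProductSummary result type, is to move the projection into a static expression tree that LINQ to SQL can still translate into a single query:

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public class ProductSummary
        {
            public int ProductID { get; set; }
            public string BackendProductID { get; set; }
            public string Description { get; set; }
        }

        public static class ProductQueries
        {
            // A reusable projection that LINQ to SQL can translate into one query,
            // because the logic lives in an expression tree rather than a property.
            public static readonly Expression<Func<Product, ProductSummary>> ToSummary =
                p => new ProductSummary
                {
                    ProductID = p.ProductID,
                    BackendProductID =
                        p.Mappings.Count() == 0 ? "None" :
                        p.Mappings.Count() > 1  ? "Multiple" :
                        p.Mappings.First().BackendProductID,
                    Description = p.Description
                };
        }

        // Usage (still a single round trip, no per-row queries):
        // var products = context.Products.Select(ProductQueries.ToSummary).ToList();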

    Read the article

  • Running a Model::find in for loop in cakephp v1.3

    - by Gaurav Sharma
    Hi all, How can I achieve the following result in CakePHP: In my application a Topic is related to category, category is related to city and city is finally related to state; in other words: topic belongs to category, category belongs to city, city belongs to state. Now in the Topic controller's index action I want to find out all the topics and their city and state. How can I do this? I can easily do this using a custom query (the $this->Model->query() function) but then I will be facing pagination difficulties. I tried doing it like this: function index() { $this->Topic->recursive = 0; $topics = $this->paginate(); for($i=0; $i<count($topics);$i++) { $topics[$i]['City'] = $this->Topic->Category->City->find('all', array('conditions' => array('City.id' => $topics[$i]['Category']['city_id']))); } $this->set(compact('messages')); } The method that I have adopted is not a good one (running a query in a loop). Using the recursive property and setting it to the highest value (2) will degrade performance and is not going to yield me state information. How shall I solve this? Please help Thanks

    Read the article

  • Custom UIProgressView drawing weirdness

    - by Werner
    I am trying to create my own custom UIProgressView by subclassing it and then overriding the drawRect function. Everything works as expected except the progress filling bar. I can't get the height and image right. The images are both in Retina resolution and the Simulator is in Retina mode. The images are called "progressBar@2x.png" (28px high) and "progressBarTrack@2x.png" (32px high). CustomProgressView.h #import <UIKit/UIKit.h> @interface CustomProgressView : UIProgressView @end CustomProgressView.m #import "CustomProgressView.h" @implementation CustomProgressView - (id)initWithFrame:(CGRect)frame { self = [super initWithFrame:frame]; if (self) { // Initialization code } return self; } // Only override drawRect: if you perform custom drawing. // An empty implementation adversely affects performance during animation. - (void)drawRect:(CGRect)rect { // Drawing code self.frame = CGRectMake(self.frame.origin.x, self.frame.origin.y, self.frame.size.width, 16); UIImage *progressBarTrack = [[UIImage imageNamed:@"progressBarTrack"] resizableImageWithCapInsets:UIEdgeInsetsZero]; UIImage *progressBar = [[UIImage imageNamed:@"progressBar"] resizableImageWithCapInsets:UIEdgeInsetsMake(4, 4, 5, 4)]; [progressBarTrack drawInRect:rect]; NSInteger maximumWidth = rect.size.width - 2; NSInteger currentWidth = floor([self progress] * maximumWidth); CGRect fillRect = CGRectMake(rect.origin.x + 1, rect.origin.y + 1, currentWidth, 14); [progressBar drawInRect:fillRect]; } @end The resulting ProgressView has the right height and width. It also fills at the right percentage (currently set at 80%). But the progress fill image isn't drawn correctly. Does anyone see where I go wrong?

    Read the article

  • How to control virtual memory management in linux?

    - by chmike
    I'm writing a program that uses an mmap file to hold a huge buffer organized as an array of 64 MB blocks. The blocks are used to aggregate data received from different hosts through the network. As a consequence, the total data size written in each block is not known in advance. Most of the time it is only 2MB but in some cases it can be up to 20MB or more. The data doesn't stay long in the buffer. 90% is deleted after less than a second and the rest is transmitted to another host. I would like to know if there is a way to tell the virtual memory manager that RAM pages are not dirty anymore when data is deleted. Should I use mmap and munmap when a block is used and released to control the virtual memory? What would be the overhead of doing this? Also, some colleagues expressed concerns about the performance impact of allocating such a big mmap space. I expect it to behave like a swap file so that only dirty pages are to be considered.

    Read the article

  • Entity Framework autoincrement key

    - by Tommy Ong
    I'm facing an issue of a duplicated incremental field in a concurrency scenario. I'm using EF as the ORM tool, attempting to insert an entity with a field that acts as an incremental INT field. Basically this field is called "SequenceNumber": for each new record, before insert, the code reads the database using MAX to get the last SequenceNumber, appends +1 to it, and saves the changes. Between getting the last SequenceNumber and saving is where the concurrency issue happens. I'm not using ID for SequenceNumber as it is not a unique constraint, and it may reset on certain conditions such as monthly, yearly, etc.

      InvoiceNumber  | SequenceNumber | DateCreated
      INV00001_08_14 | 1              | 25/08/2014
      INV00001_08_14 | 1              | 25/08/2014  <= (concurrency is creating two SeqNo 1)
      INV00002_08_14 | 2              | 25/08/2014
      INV00003_08_14 | 3              | 26/08/2014
      INV00004_08_14 | 4              | 27/08/2014
      INV00005_08_14 | 5              | 29/08/2014
      INV00001_09_14 | 1              | 01/09/2014  <= (sequence number reset)

    The invoice number is formatted based on the SequenceNumber. After some research I've ended up with these possible solutions, but want to know the best practice: 1) Lock the table against any reads until the current transaction completes (pessimistic locking; I'm not fond of this idea, as I guess the performance impact would be great). 2) Create a stored procedure solely for this purpose that does the select and insert in a single statement, so the concurrency window is minimal (I would prefer an EF-based approach if possible).
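
    A sketch of a third option, assuming table and column names Invoices, SequenceNumber and DateCreated (so not a drop-in fix), is to let SQL Server compute MAX+1 atomically in one statement issued from EF6, so two concurrent inserts for the same month cannot read the same MAX:

        using System;
        using System.Data.Entity;

        public static class InvoiceSequence
        {
            // Sketch only: raw SQL issued through the EF6 DbContext.
            public static void InsertWithNextSequence(DbContext context, DateTime created)
            {
                using (var tx = context.Database.BeginTransaction())
                {
                    // UPDLOCK/HOLDLOCK makes a concurrent insert for the same
                    // month wait until this transaction commits.
                    context.Database.ExecuteSqlCommand(@"
                        INSERT INTO Invoices (SequenceNumber, DateCreated)
                        SELECT ISNULL(MAX(SequenceNumber), 0) + 1, @p0
                        FROM Invoices WITH (UPDLOCK, HOLDLOCK)
                        WHERE MONTH(DateCreated) = MONTH(@p0)
                          AND YEAR(DateCreated)  = YEAR(@p0)",
                        created);
                    tx.Commit();
                }
            }
        }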

    Read the article

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application, running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GB of RAM. The database is multi-tenant; each customer has his own file, around 5 to 10 GB each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight. The database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I thought about the following options: (1) use two big machines running SQL Server 2012 to serve the 32 servers with data, with the 32 servers running IIS and providing REST services; (2) denormalize the database and use Redis on each web server, with Booksleeve as the Redis client; (3) use a combination of SQL Server and Redis; (4) use SQL Server 2012 together with Hadoop; (5) use Hadoop without SQL Server. What is the best way for a read-only database to get the best performance without losing maintainability? Does Map-Reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and the interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to write one accelerator table against which all queries can be performed, with all other information from the other tables available by a simple key lookup.

    Read the article

  • How do the major C# DI/IoC frameworks compare?

    - by Slomojo
    At the risk of stepping into holy war territory, what are the strengths and weaknesses of these popular DI/IoC frameworks, and could one easily be considered the best: Ninject, Unity, Castle.Windsor, Autofac, StructureMap? Are there any other DI/IoC frameworks for C# that I haven't listed here? In the context of my use case, I'm building a client WPF app and a WCF/SQL services infrastructure; ease of use (especially in terms of clear and concise syntax), consistent documentation, good community support and performance are all important factors in my choice. Update: The resources and duplicate questions cited appear to be out of date; can someone with knowledge of all these frameworks come forward and provide some real insight? I realise that most opinion on this subject is likely to be biased, but I am hoping that someone has taken the time to study all these frameworks and can offer at least a generally objective comparison. I am quite willing to make my own investigations if this hasn't been done before, but I assumed this was something at least a few people had done already. Second Update: If you do have experience with more than one DI/IoC container, please rank and summarise the pros and cons of those, thank you. This isn't an exercise in discovering all the obscure little containers that people have made; I'm looking for comparisons between the popular (and active) frameworks.
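
    For a feel of the syntax differences, here is a minimal sketch (hypothetical IFoo/Foo types) of the same registration and resolution in two of the listed containers, Autofac and Ninject:

        using Autofac;
        using Ninject;

        public interface IFoo { }
        public class Foo : IFoo { }

        public static class ContainerSyntaxDemo
        {
            public static void Main()
            {
                // Autofac: register against a builder, then build an immutable container.
                var builder = new ContainerBuilder();
                builder.RegisterType<Foo>().As<IFoo>();
                using (var container = builder.Build())
                {
                    IFoo a = container.Resolve<IFoo>();
                }

                // Ninject: bindings go straight onto the kernel.
                using (var kernel = new StandardKernel())
                {
                    kernel.Bind<IFoo>().To<Foo>();
                    IFoo b = kernel.Get<IFoo>();
                }
            }
        }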

    Read the article
