Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • HTML relative links on various domains

    - by Adam Kiss
    A quick question: when you code/develop themes, how do you link to the various files in your HTML/CSS? Example: at our firm we mostly use <base target="http://whatever"> in our main template, then just <img src="./images/file.png"> in our HTML, "/category/page" as links, and something similar in our CSS. However, when testing on different machines we use an IP address rather than localhost for the coder's main dev station, so all base links break (localhost resolves to the viewing machine, not the coder's, on our network). The same thing happens when updating pages: on the dev server we have to edit the base target so that browsing the site won't take us to the live site. That part is actually simple PHP (if ... echo, else echo something else), but it still doesn't solve the wider coding/testing problem. So, my question is: how do YOU solve it? How do you use relative links that don't care what domain the page is on and don't care about URL rewriting? (Because ../images/ resolves differently for / than for /something/somethingElse/page.)
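
    One approach that avoids hand-editing the base target per machine is to emit the <base href> dynamically from the request itself. A minimal sketch in PHP, where the hostnames and paths are placeholders for your own environments:

        <?php
        // Hypothetical hostnames/paths -- substitute your own dev and live environments.
        $host = $_SERVER['HTTP_HOST'];
        if ($host === '192.168.0.10' || $host === 'localhost') {
            $base = 'http://' . $host . '/dev-site/';   // coder's machine, reached by IP
        } else {
            $base = 'http://www.example.com/';          // live site
        }
        echo '<base href="' . htmlspecialchars($base) . '">';
        ?>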

    Read the article

  • Does running IIS7 in classic mode affect MVC output caching?

    - by Bob
    I need to run an application in classic mode for backwards compatibility with a specific application, and I am trying to understand what kind of impact that will have on the performance of an MVC application running on the site. If we put a few static file maps (for .js, .css, .png, etc.) above the ASP.NET wildcard map to reduce the amount of processing by the ASP.NET handler, will we be approaching integrated mode in terms of performance? The thing I'm primarily concerned with is any effect this might have on output caching. I understand that integrated mode might (?) allow the output cache to handle non-ASP.NET content, but that isn't really a concern. We're more interested in ensuring that the MVC application has full use of the output cache. Empirically I've found that the two configurations perform on par when things go well, but if the page references resources that are not available, integrated mode tends to fail much more quickly than classic mode (e.g. 500 ms vs 10 seconds), reducing 'hang time' on the page load. Thanks for any feedback.
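
    For reference, MVC output caching itself is declared per action and is handled by ASP.NET in both pipeline modes; a minimal sketch (the controller name and cache values are illustrative only):

        using System.Web.Mvc;

        public class AdvertsController : Controller
        {
            // Cache the rendered result for 60 seconds, varied by the id parameter.
            [OutputCache(Duration = 60, VaryByParam = "id")]
            public ActionResult Details(int id)
            {
                return View();
            }
        }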

    Read the article

  • Best way to associate data files with particular tests in RSpec / Ruby

    - by Bill T
    For my RSpec tests I would like to automatically associate data files with each test. To clarify: each of my tests needs an XML file as input data plus some XPath statements to validate the response it gets back, and I would like to externalize the XML and XPath as files and have the testing framework associate them with the particular test being run, using the test's unique ID as the file name(s). I tried to get this behavior, but the solution isn't very clean. I wrote a helper method that takes the value of "description" and combines it with __FILE__ to create a unique identifier, which is set into a global variable that other utilities can access. The unique identifier is used to associate the data files I need. I have to call this helper method as the first line of every test, which is ugly. If I have an RSpec example that looks like this: describe "Basic functions of this server I'm testing" do it "should give me back a response" do # Sets a global var to: "my_tests_spec.rb_should_give_me_back_a_response" TestHelper::who_am_i __FILE__, description ... end end Is there some better/cleaner/slicker way to get a unique ID for each test that I could use to associate data files? Perhaps something built into RSpec that I'm unaware of? Thank you, -Bill
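
    Newer RSpec versions pass the example object into hooks, which makes it possible to derive the data-file names without an explicit helper call in every test. A sketch assuming RSpec 3-style metadata and a spec/data directory (both are assumptions, not the asker's setup):

        RSpec.describe "Basic functions of this server I'm testing" do
          before(:each) do |example|
            # Build a per-example identifier from the spec file name and the description.
            id = "#{File.basename(__FILE__)}_#{example.description.gsub(/\s+/, '_')}"
            @xml_path   = File.join("spec", "data", "#{id}.xml")
            @xpath_path = File.join("spec", "data", "#{id}.xpath")
          end

          it "should give me back a response" do
            request_xml = File.read(@xml_path)
            # ... send request_xml and validate the response with @xpath_path ...
          end
        end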

    Read the article

  • How to read a remote file over the network

    - by ungalnanban
    I'm using Net::FTP to fetch a remotely hosted file. I want to read the file's content without transferring the file from the remote host to my host (localhost). Is there a module that can do this? use strict; use warnings; use Data::Dumper; use Net::FTP; my $ftp = Net::FTP->new("192.168.8.20", Debug => 0) or die "Cannot connect to some.host.name: $@"; $ftp->login("root",'root123') or die "Cannot login ", $ftp->message; $ftp->cwd("SUGUMAR") or die "Cannot change working directory ", $ftp->message; $ftp->get("Testing.txt") or die "get failed ", $ftp->message; $ftp->quit; In the sample program above, I get the file from 192.168.8.20, then read it and do my processing. I don't want to place the file on my local system; I only need to read the content of Testing.txt.
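
    One way to avoid writing a local copy is to read the data connection yourself instead of calling get(); an untested sketch using Net::FTP's retr:

        # Read the remote file straight into a scalar via the data connection.
        my $conn = $ftp->retr("Testing.txt")
            or die "retr failed ", $ftp->message;

        my $content = '';
        while ($conn->read(my $buf, 8192)) {
            $content .= $buf;
        }
        $conn->close;

        print $content;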

    Read the article

  • How to keep Windows from paging out a block of memory

    - by photo_tom
    We are working on a Vista/Windows 7 application that will run in 64-bit mode, using VS2008/C++. We need to cache hundreds of 2-3 MB blobs of data in RAM for performance reasons, up to some memory limit. Our usage profile is such that we cannot read the data in fast enough if it all stays on disk. Cached memory usage will be larger than 1 GB. For this to work well, we need to ensure that Windows does not page this memory out, as that would defeat the purpose of the cache. I've done a fair amount of research and cannot find documentation that states exactly how to do this. I've seen several references implying that memory-mapped files work this way. Is there an expert who can clarify this for me? I'm aware there are other programs we could adapt to do this, for example splitting the blobs and loading them into memcache or an in-memory database, but they all have too many problems with performance or code complexity. Suggestions?
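
    Memory-mapped views are still subject to normal paging; the documented way to pin specific pages is VirtualLock, which only succeeds once the process working set is large enough. An untested sketch (the working-set padding values are placeholders):

        #include <windows.h>
        #include <iostream>

        // Lock a buffer into physical RAM so the OS will not page it out.
        bool LockBuffer(void* buffer, SIZE_T size)
        {
            // The working set must cover everything to be locked plus normal process needs.
            SIZE_T minWs = size + (64  * 1024 * 1024);
            SIZE_T maxWs = size + (256 * 1024 * 1024);

            if (!SetProcessWorkingSetSize(GetCurrentProcess(), minWs, maxWs))
            {
                std::cerr << "SetProcessWorkingSetSize failed: " << GetLastError() << "\n";
                return false;
            }
            if (!VirtualLock(buffer, size))
            {
                std::cerr << "VirtualLock failed: " << GetLastError() << "\n";
                return false;
            }
            return true;
        }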

    Read the article

  • Weird behavior of matching array keys after json_decode()

    - by arnorhs
    I've got some very weird behavior in my PHP code. I don't know if this is actually a good SO question, since it almost looks like a bug in PHP. I had this problem in a project of mine and isolated the problem: // json object that will be converted into an array $json = '{"5":"88"}'; $jsonvar = (array) json_decode($json); // notice: Casting to an array // Displaying the array: var_dump($jsonvar); // Testing if the key is there var_dump(isset($jsonvar["5"])); var_dump(isset($jsonvar[5])); That code outputs the following: array(1) { ["5"]=> string(2) "88" } bool(false) bool(false) The big problem: Both of those tests should produce bool(true) - if you create the same array using regular php arrays, this is what you'll see: // Let's create a similar PHP array in a regular manner: $phparr = array("5" => "88"); // Displaying the array: var_dump($phparr); // Testing if the key is there var_dump(isset($phparr["5"])); var_dump(isset($phparr[5])); The output of that: array(1) { [5]=> string(2) "88" } bool(true) bool(true) So this doesn't really make sense. I've tested this on two different installations of PHP/apache. You can copy-paste the code to a php file yourself to test it. It must have something to do with the casting from an object to an array.
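
    This is the known PHP quirk where casting a stdClass with numeric-string property names to an array leaves keys that ordinary lookups cannot reach. The usual workaround is to let json_decode build the array directly instead of casting; a short sketch:

        <?php
        $json = '{"5":"88"}';

        // Second argument true returns an associative array instead of a stdClass object,
        // so the numeric key becomes a real integer key.
        $jsonvar = json_decode($json, true);

        var_dump($jsonvar);             // array(1) { [5]=> string(2) "88" }
        var_dump(isset($jsonvar["5"])); // bool(true)
        var_dump(isset($jsonvar[5]));   // bool(true)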

    Read the article

  • Line End Problem Reading with Scanner Class in Java

    - by dikbas
    I am not an experienced Java programmer, and I'm trying to write some text to a file and then read it back with Scanner. I know there are lots of ways of doing this, but I want to write records to the file with delimiters and then read back the pieces. The problem is small: in the output below, one piece of the printing is missing -- the line that reads only "Scanner" should have been prefixed with "String: ". I would appreciate it if anyone could explain why "String: " isn't printed there. I can't tell whether it is a simple printing problem or a line-ending problem with "\r\n". Here is the code and output: import java.io.FileReader; import java.io.FileWriter; import java.io.IOException; import java.util.Scanner; public class Tmp { public static void main(String args[]) throws IOException { int i; boolean b; String str; FileWriter fout = new FileWriter("test.txt"); fout.write("Testing|10|true|two|false\r\n"); fout.write("Scanner|12|one|true|"); fout.close(); FileReader fin = new FileReader("Test.txt"); Scanner src = new Scanner(fin).useDelimiter("[|\\*]"); while (src.hasNext()) { if (src.hasNextInt()) { i = src.nextInt(); System.out.println("int: " + i); } else if (src.hasNextBoolean()) { b = src.nextBoolean(); System.out.println("boolean: " + b); } else { str = src.next(); System.out.println("String: " + str); } } fin.close(); } } Here is the output: String: Testing int: 10 boolean: true String: two String: false Scanner int: 12 String: one boolean: true
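
    The output suggests the "\r\n" is being kept inside the "false" token, because the delimiter class only contains '|' and '*'. One fix, assuming the file format above, is to treat line endings as delimiters too:

        // Include \r and \n in the delimiter set (collapsing runs of them), so the
        // line terminator no longer glues "false" and "Scanner" into one token.
        Scanner src = new Scanner(fin).useDelimiter("[|\\r\\n]+");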

    Read the article

  • How do I capture keystrokes on the web?

    - by Sean
    Using PHP, JS, or HTML (or something similar), how would I capture keystrokes? For example, if the user presses Ctrl+F (or maybe even just F), a certain function should run. ++++++++++++++++++++++++EDIT+++++++++++++++++++ OK, so is this correct? I can't get it to work. I apologize for my n00bness if this is an easy question; I'm new to jQuery and still learning more and more about JS. <script> var element = document.getElementById('capture'); element.onkeypress = function(e) { var ev = e || event; if(ev.keyCode == 70) { alert("hello"); } } </script> <div id="capture"> Hello, Testing 123 </div> ++++++++++++++++EDIT++++++++++++++++++ Here is everything, but I can't get it to work: <link rel="icon" href="favicon.ico" type="image/x-icon"> <style> * { margin: 0px } div { height: 250px; width: 630px; overflow: hidden; vertical-align: top; position: relative; background-color: #999; } iframe { position: absolute; left: -50px; top: -130px; } </style> <script> document.getElementsByTagName('body')[0].onkeyup = function(e) { var ev = e || event; if(ev.keyCode == 70 && ev.ctrlKey) { //control+f alert("hello"); } } </script> <div id="capture"> Hello, Testing 123<!--<iframe src="http://www.pandora.com/" scrolling="no" width="1000" height="515"frameborder="0"></iframe>--> </div>
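
    A plain div only receives key events when it has focus, so the usual pattern is to listen at the document level instead; a minimal sketch (note the browser may still open its own Find dialog for Ctrl+F even when the handler runs):

        <script>
        // Listen on the document so no particular element needs focus.
        document.onkeydown = function (e) {
            var ev = e || window.event;
            if (ev.keyCode === 70 && ev.ctrlKey) {   // Ctrl+F
                alert("hello");
                return false;                        // attempt to suppress the default action
            }
        };
        </script>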

    Read the article

  • AIR File.resolvePath won't work anymore

    - by Palleas
    Hi all, I'm having a very strange issue: it looks like my application can't create files anymore. It works with directories, but the resolvePath() method I've used so many times doesn't. Here is what I do: var databaseFileContent : File = new File(File.desktopDirectory.nativePath + "/testing"); databaseFileContent.createDirectory(); databaseFileContent.resolvePath("test"); (Here I'm trying it on the desktop, but it's the same with applicationStorageDirectory.) When I execute this, only the "testing" folder is actually created; my file isn't. I tried creating another application that does this: trace(File.desktopDirectory.resolvePath("maiswtf.db").exists); trace(File.applicationStorageDirectory.resolvePath("wtf.db").exists); Both display "false". Am I missing something here? I have another application with this code: var databaseFileContent : File = File.applicationStorageDirectory.resolvePath(File.separator + "sitra.db"); When I run that one, it works perfectly! My file is created at /sitra.db! Any hints? I think I'm going mad :/ Thanks!
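
    Worth noting: resolvePath() only builds a File reference and returns it; it never touches the disk, and in the snippet above its return value is discarded. Creating the file itself requires writing through it, for example with a FileStream; an untested sketch:

        import flash.filesystem.File;
        import flash.filesystem.FileMode;
        import flash.filesystem.FileStream;

        var dir:File = File.desktopDirectory.resolvePath("testing");
        dir.createDirectory();

        // Capture the File returned by resolvePath(), then create it by opening for write.
        var dbFile:File = dir.resolvePath("test.db");
        var stream:FileStream = new FileStream();
        stream.open(dbFile, FileMode.WRITE);
        stream.close();

        trace(dbFile.exists);   // true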

    Read the article

  • ASP.Net event only being raised every other time?

    - by eftpotrm
    I have an ASP.NET web user control that represents a single entry in a list. To allow users to reorder the items, each item has buttons to move it up or down the list. Clicking one of these raises an event to the parent page, which then shuffles the items in the placeholder control. Code fragments from the list entry: Public Event UpClicked As System.EventHandler Protected Sub btnUp_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnUp.Click RaiseEvent UpClicked(Me, New EventArgs()) End Sub And the parent container: rem (within the code to add an individual item to the placeholder) AddHandler l_oItem.UpClicked, AddressOf UpClicked Protected Sub UpClicked(ByVal sender As Object, ByVal e As EventArgs) MoveItem(DirectCast(sender, ScriptListItem), -1) End Sub In testing it originally looked as though, every other time, the sender that reaches UpClicked (verified by its properties) is an adjacent list item, not the one I just clicked: the first click is always wrong, then the second hits the correct control. At present, testing appears to show that the button's click event is simply being ignored every other time through. Breakpoints on the click events within the control simply aren't being hit, though the events are definitely being wired up. Why?
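
    One common cause of this pattern is dynamically added controls not being recreated, with stable IDs, on every postback before the event-raising phase. A rough sketch of that idea only (ItemCount, phItems and the .ascx path are placeholders, not the asker's actual members):

        Protected Sub Page_Init(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Init
            ' Re-create every list item on each postback, in the same order and with the
            ' same IDs, so view state and click events line up with the right control.
            For i As Integer = 0 To ItemCount - 1
                Dim item As ScriptListItem = CType(LoadControl("ScriptListItem.ascx"), ScriptListItem)
                item.ID = "listItem" & i.ToString()
                AddHandler item.UpClicked, AddressOf UpClicked
                phItems.Controls.Add(item)
            Next
        End Sub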

    Read the article

  • Iteration over a LINQ to SQL query is very slow

    - by devzero
    I have a view, AdvertView, in my database; the view is a simple join between some tables (advert, customer, properties). Then I have a simple LINQ query to fetch all adverts for a customer: public IEnumerable<AdvertView> GetAdvertForCustomerID(int customerID) { var advertList = from advert in _dbContext.AdvertViews where advert.Customer_ID.Equals(customerID) select advert; return advertList; } I then wish to map this to model items for my MVC application: public List<AdvertModelItem> GetAdvertsByCustomer(int customerId) { List<AdvertModelItem> lstAdverts = new List<AdvertModelItem>(); List<AdvertView> adViews = _dataHandler.GetAdvertForCustomerID(customerId).ToList(); foreach(AdvertView adView in adViews) { lstAdverts.Add(_advertMapper.MapToModelClass(adView)); } return lstAdverts; } I was expecting some performance issues with the SQL, but the problem seems to be with the .ToList() call. I'm using ANTS Performance Profiler and it reports that the total runtime of the function is 1,400 ms, of which 850 ms is spent in ToList(). So my question is: why does the ToList call take so long here?
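
    One thing worth checking: the query is deferred, so ToList() is where the SQL round-trip and object materialization actually happen, and the profiler attributes all of that time to ToList(). Logging the generated SQL shows what really runs; a sketch, assuming _dbContext is a LINQ to SQL DataContext that is reachable from this code:

        // Send the generated SQL to the console (or any TextWriter) before enumerating.
        _dbContext.Log = Console.Out;
        var adverts = _dataHandler.GetAdvertForCustomerID(customerId).ToList();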

    Read the article

  • Dictionaries with more than one key per value in Python

    - by nickname
    I am attempting to create a nice interface to access a data set where each value has several possible keys. For example, suppose that I have both a number and a name for each value in the data set. I want to be able to access each value using either the number OR the name. I have considered several possible implementations: Using two separate dictionaries, one for the data values organized by number, and one for the data values organized by name. Simply assigning two keys to the same value in a dictionary. Creating dictionaries mapping each name to the corresponding number, and vice versa Attempting to create a hash function that maps each name to a number, etc. (related to the above) Creating an object to encapsulate all three pieces of data, then using one key to map dictionary keys to the objects and simply searching the dictionary to map the other key to the object. None of these seem ideal. The first seems ugly and unmaintainable. The second also seems fragile. The third/fourth seem plausible, but seem to require either much manual specification or an overly complex implementation. Finally, the fifth loses constant-time performance for one of the lookups. In C/C++, I believe that I would use pointers to reference the same piece of data from different keys. I know that the problem is rather similar to a database lookup problem by a non-key column, however, I would like (if possible), to maintain the approximate O(1) performance of Python dictionaries. What is the most Pythonic way to achieve this data structure?
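
    Since Python names are references, option 1 is usually cheaper than it sounds: two dictionaries can index the very same objects, giving O(1) lookup by either key without copying any data. A minimal sketch:

        class Record:
            def __init__(self, number, name, value):
                self.number = number
                self.name = name
                self.value = value

        data = [Record(1, "alpha", 10.0), Record(2, "beta", 20.0)]

        # Both dicts reference the *same* Record objects -- no values are duplicated.
        by_number = {r.number: r for r in data}
        by_name = {r.name: r for r in data}

        assert by_number[2] is by_name["beta"]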

    Read the article

  • Forming triangles from points and relations

    - by SiN
    Hello, I want to generate triangles from points and optional relations between them. Not all points form triangles, but many of them do. In the initial structure, I've got a database with the following tables: Nodes(id, value) Relations(id, nodeA, nodeB, value) Triangles(id, relation1_id, relation2_id, relation3_id) In order to generate triangles from both the nodes and relations tables, I've used the following query: INSERT INTO Triangles SELECT t1.id, t2.id, t3.id FROM Relations t1, Relations t2, Relations t3 WHERE t1.id < t2.id AND t3.id > t1.id AND ( t1.nodeA = t2.nodeA AND (t3.nodeA = t1.nodeB AND t3.nodeB = t2.nodeB OR t3.nodeA = t2.nodeB AND t3.nodeB = t1.nodeB) OR t1.nodeA = t2.nodeB AND (t3.nodeA = t1.nodeB AND t3.nodeB = t2.nodeA OR t3.nodeA = t2.nodeA AND t3.nodeB = t1.nodeB) ) It works perfectly on small data sets (fewer than ~50 points). In some cases, however, I've got around 100 points all related to each other, which leads to thousands of relations. So when the expected number of triangles is in the hundreds of thousands, or even in the millions, the query can take several hours. My main problem is not the select query itself: while I watch it execute in Management Studio, the returned results arrive slowly, around 2,000 rows per minute, which is not acceptable for my case. As a matter of fact, the amount of work grows exponentially, and that is hurting performance badly. I've tried doing it as LINQ to Objects from my code, but performance was even worse. I've also tried using SqlBulkCopy on a reader over the result, also with no luck. So the question is... any ideas or workarounds?
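
    One workaround sometimes suggested is to store each relation exactly once in a canonical order (nodeA < nodeB) with an index on (nodeA, nodeB); each triangle a < b < c is then produced exactly once without the symmetric OR branches. A sketch under that assumption, treating Triangles.id as an identity column:

        INSERT INTO Triangles (relation1_id, relation2_id, relation3_id)
        SELECT ab.id, ac.id, bc.id
        FROM Relations ab
        JOIN Relations ac ON ac.nodeA = ab.nodeA AND ac.nodeB > ab.nodeB
        JOIN Relations bc ON bc.nodeA = ab.nodeB AND bc.nodeB = ac.nodeB;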

    Read the article

  • Is it better to query the database or read from a file? PHP & MySQL

    - by pfunc
    I am keeping a large number of words in a database that I want to match articles against. I was thinking it would be better to keep these words in an array and load that array whenever needed, instead of querying the database every time (since the words won't change much). Is there much of a performance difference in doing this? And if I were to do it, how do I write a script that writes the array to a new PHP file? I tried writing the array like so: while( $row = mysql_fetch_assoc($query)) { $newArray[] = $row; } $fp = fopen('noWordsArr.php', 'w'); fwrite($fp, $newArray); fclose($fp); But all I get in the other file is "Array". I figured I could write this and then have a cron job regenerate the file every few days in case things have changed. But if there is no performance advantage it probably won't be necessary, and I can just query the database every time I need the words.
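
    The literal "Array" appears because fwrite() cannot serialize an array. One way to generate a loadable PHP file is var_export(), which emits valid PHP source; a sketch reusing the loop above:

        <?php
        $newArray = array();
        while ($row = mysql_fetch_assoc($query)) {
            $newArray[] = $row;
        }

        // Write the array out as PHP source that returns the array when included.
        file_put_contents('noWordsArr.php', '<?php return ' . var_export($newArray, true) . ';');

        // Later, anywhere in the app:
        $words = require 'noWordsArr.php';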

    Read the article

  • Function Returning Negative Value

    - by Geowil
    I still have not run it through enough tests, but for some reason, with certain non-negative inputs, this function will sometimes return a negative value. I have done a lot of manual testing with a calculator using different values, but I have yet to see it produce this behavior. I was wondering if someone would take a look and see if I am missing something. float calcPop(int popRand1, int popRand2, int popRand3, float pERand, float pSRand) { return ((((((23000 * popRand1) * popRand2) * pERand) * pSRand) * popRand3) / 8); } The variables all contain randomly generated values: popRand1: between 1 and 30 popRand2: between 10 and 30 popRand3: between 50 and 100 pSRand: between 1 and 1000 pERand: between 1.0f and 5500.0f, which is then multiplied by 0.001f before being passed to the function above Edit: Alright, so after following the execution a bit more closely, it is not the fault of this function directly. It produces a very large positive float which then flips negative when I use this code later on: pPMax = (int)pPStore; pPStore is a float that holds calcPop's return value. So the question now is: how do I stop the formula from doing this? Testing even with very high values in a calculator has never shown this behavior. Is there something in how the compiler handles the order of operations that causes this, or are my values simply going too high? If the latter, I could just increase the division to 16, I think.
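
    With the ranges above the product can reach roughly 23000 * 30 * 30 * 5.5 * 1000 * 100 / 8, on the order of 10^12, which a float can hold but an int cannot, so the (int) cast overflows and can come out negative. A small sketch of a guard (toPopulation is a hypothetical helper name; switching pPMax to a 64-bit integer or double is the other option):

        #include <climits>

        int toPopulation(float pPStore)
        {
            // Clamp before casting so values beyond INT_MAX cannot wrap negative.
            if (pPStore > static_cast<float>(INT_MAX))
                return INT_MAX;
            return static_cast<int>(pPStore);
        }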

    Read the article

  • Problem with derived ControlTemplates in WPF

    - by Frank Fella
    The following XAML code works: <Window x:Class="DerivedTemplateBug.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:DerivedTemplateBug" Title="Window1" Height="300" Width="300"> <Button> <Button.Template> <ControlTemplate> <Border BorderBrush="Black" BorderThickness="2"> <TextBlock>Testing!</TextBlock> </Border> </ControlTemplate> </Button.Template> </Button> </Window> Now, if you define the following class derived from ControlTemplate: using System.Windows.Controls; namespace DerivedTemplateBug { public class DerivedTemplate : ControlTemplate { } } And then swap the ControlTemplate for the derived class: <Window x:Class="DerivedTemplateBug.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:DerivedTemplateBug" Title="Window1" Height="300" Width="300"> <Button> <Button.Template> <local:DerivedTemplate> <Border BorderBrush="Black" BorderThickness="2"> <TextBlock>Testing!</TextBlock> </Border> </local:DerivedTemplate> </Button.Template> </Button> </Window> You get the following error: Invalid ContentPropertyAttribute on type 'DerivedTemplateBug.DerivedTemplate', property 'Content' not found. Can anyone tell me why this is the case?

    Read the article

  • What are CDN Best Practices?

    - by Wild Thing
    Hi, I have recently started using the Rackspace Cloudfiles CDN (Limelight), about which I have some questions: I am using jQuery, jQuery UI and jQuery tools in addition to custom JS code. Also, my site is written in ASP.Net, which means there is some ASP.Net generated JS code. Right now what I have done is that I have combined all of the js (including the jquery code), except the ASP.Net generated JS into one file. I am hosting this on the Rackspace CDN. I am wondering if it would make more sense to just get the jQuery, jQuery UI files from the Google hosted CDN (which I suspect would work very well in serving these files, since they will be in many users' cache already)? This would mean one extra HTTP request, so I'm not sure if it'll help. Right now I have multiple containers for my assets. For example, in Rackspace I have 3 containers: JS, CSS and Images. The URL subdomain for all 3 is different. Will that lead to a performance penalty? Should I just use one container (and thus one domain for the CDN)? Is there a way of having the MS ASP.Net generated JS loaded from MS CDN? Would this have a performance hit as per the above question? Thanks in advance, WT
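
    For the first point, the usual pattern is to reference the stock libraries from Google's CDN and keep only the site-specific bundle on your own CDN; a sketch (the last URL is a placeholder for your Cloud Files container, and the version numbers should match what the site actually uses):

        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
        <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.2/jquery-ui.min.js"></script>
        <script src="http://cdn.example.com/js/site-combined.min.js"></script>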

    Read the article

  • Rails creating users, roles, and projects

    - by Bobby
    I am still fairly new to rails and activerecord, so please excuse any oversights. I have 3 models that I'm trying to tie together (and a 4th to actually do the tying) to create a permission scheme using user-defined roles. class User < ActiveRecord::Base has_many :user_projects has_many :projects, :through => :user_projects has_many :project_roles, :through => :user_projects end class Project < ActiveRecord::Base has_many :user_projects has_many :users, :through => :user_projects has_many :project_roles end class ProjectRole < ActiveRecord::Base belongs_to :projects belongs_to :user_projects end class UserProject < ActiveRecord::Base belongs_to :user belongs_to :project has_one :project_role attr_accessible :project_role_id end The project_roles model contains a user-defined role name, and booleans that define whether the given role has permissions for a specific task. I'm looking for an elegant solution to reference that from anywhere within the project piece of my application easily. I do already have a role system implemented for the entire application. What I'm really looking for though is that the users will be able to manage their own roles on a per-project basis. Every project gets setup with an immutable default admin role, and the project creator gets added upon project creation. Since the users are creating the roles, I would like to be able to pull a list of role names from the project and user models through association (for display purposes), but for testing access, I would like to simply reference them by what they have access to without having reference them by name. Perhaps something like this? def has_perm?(permission, user) # The permission that I'm testing user.current_project.project_roles.each do |role| if role.send(permission) # Not sure that's right... do_stuff end end end I think I'm in over my head on this one because I keep running in circles on how I can best implement this.
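
    Given the models above, one sketch of the permission check is to find the user's membership for the project in question and test the boolean column by name, guarding against unknown permission names (the finder and method names here are assumptions, not tested code):

        def has_perm?(permission, user, project)
          membership = user.user_projects.find_by_project_id(project.id)
          return false unless membership && membership.project_role

          role = membership.project_role
          # Only call the permission flag if the roles table actually has that column.
          role.respond_to?(permission) && role.send(permission)
        end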

    Read the article

  • Why can't Git merge file changes with a modified parent/master?

    - by Andy
    I have a file with one line in it. I create a branch and add a second line to the same file, then save and commit to the branch. I switch back to the master and add a different second line to the file, then save and commit to the master. So there are now 3 unique lines in total. If I now try to merge the branch back into the master, it hits a merge conflict. Why can't Git simply merge the lines, one after the other? My attempt at merging behaves something like this: PS D:\dev\testing\test1> git merge newbranch Auto-merging hello.txt CONFLICT (content): Merge conflict in hello.txt Automatic merge failed; fix conflicts and then commit the result. PS D:\dev\testing\test1> git diff diff --cc hello.txt index 726eeaf,e48d31a..0000000 --- a/hello.txt +++ b/hello.txt @@@ -1,2 -1,2 +1,6 @@@ This is the first line. - New line added by master. -Added a line in newbranch. ++<<<<<<< HEAD ++New line added by master. ++======= ++Added a line in newbranch. ++>>>>>>> newbranch Is there a way to make it slot the lines in automatically, one after the other?

    Read the article

  • Replacing a symbol in an object file at build time, for example swapping out main

    - by Anthony Sottile
    Here's the use case: I have a .cpp file with functions implemented in it. For the sake of example, say it contains the following: [main.cpp] #include <iostream> int foo(int); int foo(int a) { return a * a; } int main() { for (int i = 0; i < 5; i += 1) { std::cout << foo(i) << std::endl; } return 0; } I want to perform some automated testing on the function foo in this file, but I would need to replace the main() function to do so. Preferably I'd like a separate file like this that I could link in on top of the original: [mymain.cpp] #include <iostream> #include <cassert> extern int foo(int); int main() { assert(foo(1) == 1); assert(foo(2) == 4); assert(foo(0) == 0); assert(foo(-2) == 4); return 0; } If at all possible I'd like to avoid changing the original .cpp file to do this -- if that isn't possible, my fallback approach would be: do a replace for "(\s)main\s*\(" == "\1__oldmain\(" and compile as usual. The environment I am targeting is Linux with g++.
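
    One way to leave main.cpp untouched is to rename its main symbol in the compiled object file and link the test harness's main instead; an untested sketch using binutils objcopy:

        g++ -c main.cpp -o main.o
        # Rename the original entry point inside the object file.
        objcopy --redefine-sym main=__oldmain main.o main_renamed.o
        g++ -c mymain.cpp -o mymain.o
        g++ mymain.o main_renamed.o -o run_tests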

    Read the article

  • Flash Player: Any remedy for the stale video image data problem (in a reused NetStream object)?

    - by amn
    Has anyone experienced stale stills of a previous playback for a reused NetStream object? If so, what are the workarounds for this, except re-creating the object (which eats performance and time)? It is hard to reuse NetStream objects because of a (in my opinion) fundamental issue with NetStream objects - when you 'close' a playing stream and at a later point issue a 'play' call on it again with a different name, the stream appears to still contain a stale image lingering from previous playback, and this is of course displayed in the Video object for a moment - the moment I assume it takes for new stream data to become available from server. Because of this behavior, to improve my users' visual experience, I simply discard a NetStream object after a playback session, and assign a new NetStream object to the same variable, set it up, and play something else. It appears to work - no stale image - but what bugs me is that it's a work around and costs performance (construction and setting up the object again - event listeners and 'client' delegates and more memory usage - NetStream objects are not garbage collected immediately, it takes some time). It would be really nice to REALLY be able to reuse a stream. I am thinking of something akin to Video.clear method, but for the NetStream class. Am I missing something?
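
    A lighter workaround than rebuilding the NetStream is to blank the attached Video between plays, so the last decoded frame is not shown while the new stream buffers; a sketch, assuming video is the Video instance and stream is the reused NetStream:

        stream.close();
        video.clear();                 // wipe the currently displayed frame
        video.attachNetStream(null);   // detach while idle

        // ... later, when starting the next playback ...
        video.attachNetStream(stream);
        stream.play("nextVideo.flv");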

    Read the article

  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that my problems were the result of not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder -- yes, silly me. After a while I managed to get all the .jar files where they belong, and they come to a whopping 12.5 MB, more than 30 of them! That concerns me, although it probably (and hopefully) shouldn't. How does Java operate with these JAR files? They take up quite a bit of hard drive space for compressed, compiled code, so it seems they could fill a lot of RAM very quickly. My questions are: Does Java load an entire .jar file into memory when, say, a class in that .jar is instantiated? What about stuff in the .jar that never gets used? Do .jars get cached somehow for better application performance? When a single .jar is loaded, I understand it sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the server instance), unlike PHP where objects are created on the fly with each request -- is this assumption correct? When using Spring, I'm thinking: I had to include all those fiddly .jars, so wouldn't I be better off just using plain Java, with at least an ORM solution like Hibernate? So far Spring has cost extra configuration time, extra hard drive space, and extra memory and CPU consumption, so I'm concerned the framework will cost too much application performance just to get, for example, IoC working with my BlazeDS server. There is still an ORM, a unit testing framework, and bits and pieces here and there to come. It's just so easy to bloat up a project quickly and irresponsibly. Where do I draw the line?

    Read the article

  • How can I optimize this loop?

    - by Moshe
    I've got a piece of code that returns a super-long string representing "search results". Each result is separated by a double HTML break tag. For example: Result1<br><br>Result 2<br><br>Result3 I've got the following loop that takes each result and puts it into an array, stripping out the break indicator, kBreakIndicator (<br><br>). The problem is that this loop takes way too long to execute. With a few results it's fine, but once you hit a hundred results it's about 20-30 seconds slower, which is unacceptable performance. What can I do to improve it? Here's my code (content is the original NSString): NSMutableArray *results = [[NSMutableArray alloc] init]; //Loop through the string of results and take each result and put it into an array while(![content isEqualToString:@""]){ NSRange rangeOfResult = [content rangeOfString:kBreakIndicator]; NSString *temp = (rangeOfResult.location != NSNotFound) ? [content substringToIndex:rangeOfResult.location] : nil; if (temp) { [results addObject:temp]; content = [[[content stringByReplacingOccurrencesOfString:[NSString stringWithFormat:@"%@%@", temp, kBreakIndicator] withString:@""] mutableCopy] autorelease]; }else{ [results addObject:[content description]]; content = [[@"" mutableCopy] autorelease]; } } //Do something with the results array. [results release];
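
    The repeated stringByReplacingOccurrencesOfString: call rescans and copies the whole remaining string on every pass, which makes the loop quadratic. A single split is linear; a minimal sketch:

        // One pass over the string; no per-result copying of the remainder.
        NSArray *results = [content componentsSeparatedByString:kBreakIndicator];
        // ... use results ...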

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and reads a large number of disk files. The total load can be more than 3 GB. A custom memory manager uses memory-mapped files to read this huge amount of data. Files are mapped into the process memory space only when needed, which keeps the process memory well under control. What we observe, though, is that with memory mapping the system cache keeps growing until it occupies all available physical memory, which slows down the entire system. My question is: how do I prevent the system cache from hogging physical memory? I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but then the read operations take a considerable amount of time and slow down application performance. How can I achieve scalability without sacrificing much performance? What are the common techniques used in such cases? I don't have a good understanding of the WinXP OS caching behavior; any good links explaining it would also be helpful.
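
    On some Windows versions the size of the system file cache can be capped directly, which limits how much physical RAM cached views of the mapped files can consume; this API may not be available on XP, so treat the following as an untested sketch with placeholder sizes (the calling process also needs the increase-quota privilege):

        #include <windows.h>

        // Cap the system file cache between 64 MB and 512 MB.
        if (!SetSystemFileCacheSize(64 * 1024 * 1024, 512 * 1024 * 1024, 0))
        {
            // GetLastError() explains the failure, e.g. a missing privilege.
        }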

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

    Read the article
