Search Results

Search found 23634 results on 946 pages for 'performance management'.

Page 42 of 946

  • sql-server performance optimization by removing print statements

    - by AG
    We're going through a round of sql-server stored procedure optimizations. The one recommendation we've found that clearly applies for us is 'SET NOCOUNT ON' at the top of each procedure. (Yes, I've seen the posts that point out issues with this depending on what client objects you run the stored procedures from but these are not issues for us.) So now I'm just trying to add in a bit of common sense. If the benefit of SET NOCOUNT ON is simply to reduce network traffic by some small amount every time, wouldn't it also make sense to turn off all the PRINT statements we have in the stored procedures that we only use for debugging? I can't see how it can hurt performance. OTOH, it's a bit of a hassle to implement due to the fact that some of the print statements are the only thing within else clauses, so you can't just always comment out the one line and be done. The change carries some amount of risk so I don't want to do it if it isn't going to actually help. But I don't see eliminating print statements mentioned anywhere in articles on optimization. Is that because it is so obvious no one bothers to mention it?
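    For illustration, a minimal T-SQL sketch of the pattern under discussion - NOCOUNT on at the top and the debug-only PRINTs behind an optional flag, so they need not be deleted at all (procedure, table and parameter names are invented):

      -- Sketch only: names are made up for the example.
      CREATE PROCEDURE dbo.usp_AdjustCustomerBalance
          @CustomerId INT,
          @Amount     MONEY,
          @Debug      BIT = 0               -- diagnostics off by default
      AS
      BEGIN
          SET NOCOUNT ON;                   -- suppress "N rows affected" chatter

          IF @Debug = 1
              PRINT 'Adjusting balance for customer ' + CAST(@CustomerId AS VARCHAR(12));

          UPDATE dbo.Customer
          SET    Balance = Balance + @Amount
          WHERE  CustomerId = @CustomerId;

          IF @Debug = 1
              PRINT 'Done.';
      END

    Debug calls pass @Debug = 1 explicitly; production callers never see the messages, and the else-only PRINT branches can keep their statements instead of being commented out.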

    Read the article

  • WPF drawing performance with large numbers of geometries

    - by MyFaJoArCo
    Hello, I have problems with WPF drawing performance. There are a lot of small EllipseGeometry objects (1024 ellipses, for example), which are added to three separate GeometryGroups with different foreground brushes. I then render it all on a simple Image control. Code:

      DrawingGroup tmpDrawing = new DrawingGroup();
      GeometryGroup onGroup = new GeometryGroup();
      GeometryGroup offGroup = new GeometryGroup();
      GeometryGroup disabledGroup = new GeometryGroup();
      for (int x = 0; x < DisplayWidth; ++x)
      {
          for (int y = 0; y < DisplayHeight; ++y)
          {
              if (States[x, y] == true)
                  onGroup.Children.Add(new EllipseGeometry(new Rect((double)x * EDGE, (double)y * EDGE, EDGE, EDGE)));
              else if (States[x, y] == false)
                  offGroup.Children.Add(new EllipseGeometry(new Rect((double)x * EDGE, (double)y * EDGE, EDGE, EDGE)));
              else
                  disabledGroup.Children.Add(new EllipseGeometry(new Rect((double)x * EDGE, (double)y * EDGE, EDGE, EDGE)));
          }
      }
      tmpDrawing.Children.Add(new GeometryDrawing(OnBrush, null, onGroup));
      tmpDrawing.Children.Add(new GeometryDrawing(OffBrush, null, offGroup));
      tmpDrawing.Children.Add(new GeometryDrawing(DisabledBrush, null, disabledGroup));
      DisplayImage.Source = new DrawingImage(tmpDrawing);

    It works fine but takes too much time: 0.5 s on a Core 2 Quad, 2 s on a Pentium 4, and I need <0.1 s everywhere. All ellipses, as you can see, are identical. The background of the control that hosts DisplayImage is solid (black, for example), so that fact can be exploited. I tried using 1024 Ellipse elements instead of an Image with EllipseGeometries, and it worked much faster (~0.5 s), but still not fast enough. How can I speed this up? Regards, Oleg Eremeev P.S. Sorry for my English.
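    One low-risk tweak sometimes suggested for static WPF drawings (a sketch against the snippet above, not a verified fix for this case) is to freeze the brushes and the finished DrawingGroup, so the render thread skips change tracking on them:

      // Sketch: freeze everything that will not change after construction.
      // OnBrush/OffBrush/DisabledBrush and tmpDrawing are the same objects
      // as in the code above.
      OnBrush.Freeze();
      OffBrush.Freeze();
      DisabledBrush.Freeze();

      tmpDrawing.Children.Add(new GeometryDrawing(OnBrush, null, onGroup));
      tmpDrawing.Children.Add(new GeometryDrawing(OffBrush, null, offGroup));
      tmpDrawing.Children.Add(new GeometryDrawing(DisabledBrush, null, disabledGroup));

      tmpDrawing.Freeze();                   // no further edits are possible after this
      DisplayImage.Source = new DrawingImage(tmpDrawing);

    Freezing only helps if the drawing really is immutable afterwards; whether it gets anywhere near the 0.1 s target here would have to be measured.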

    Read the article

  • MySQL Normalization stored procedure performance

    - by srkiNZ84
    Hi, I've written a stored procedure in MySQL to take values currently in a table and to "normalize" them. This means that for each value passed to the stored procedure, it checks whether the value is already in the table. If it is, it stores the id of that row in a variable; if the value is not in the table, it stores the newly inserted value's id. The stored procedure then takes the ids and inserts them into a table which is equivalent to the original de-normalized table, but this table is fully normalized and consists mainly of foreign keys. My problem with this design is that the stored procedure takes approximately 10 ms to return, which is too long when you're trying to work through some 10 million records. My suspicion is that the performance is down to the way in which I'm doing the inserts, i.e.:

      INSERT INTO TableA (first_value) VALUES (argument_from_sp)
          ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id);
      SET @TableAId = LAST_INSERT_ID();

    The "ON DUPLICATE KEY UPDATE" is a bit of a hack, because on a duplicate key I don't want to update anything but rather just return the id value of the row. If you skip this step, though, the LAST_INSERT_ID() function returns the wrong value when you run the "SET ..." statement. Does anyone know of a better way to do this in MySQL? Thank you
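    For comparison, a sketch of the look-up-first variant wrapped in a procedure (same table/column names as above; whether it actually beats the ON DUPLICATE KEY trick, and how it behaves under concurrent writers, would need measuring):

      -- Sketch: assumes a UNIQUE index on first_value, which the original
      -- ON DUPLICATE KEY approach already implies. Note the SELECT-then-INSERT
      -- pair is racy if several sessions normalize the same value at once.
      DELIMITER //
      CREATE PROCEDURE normalize_value(IN p_value VARCHAR(255), OUT p_id INT)
      BEGIN
          SELECT id INTO p_id FROM TableA WHERE first_value = p_value;
          IF p_id IS NULL THEN
              INSERT INTO TableA (first_value) VALUES (p_value);
              SET p_id = LAST_INSERT_ID();
          END IF;
      END //
      DELIMITER ;

    Existing values then avoid the write path entirely, which is usually the common case once most of the 10 million records have been seen.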

    Read the article

  • visualising piano performance evaluation

    - by Dolphin
    I need to develop a performance evaluator for piano playing. Based on a MIDI file generated from sheet music, I need to evaluate the MIDI of the actual playing (MIDI keyboard). I'm planning to evaluate the playing based on note pitch, duration and loudness. The evaluation is, I suppose, a comparison of the notes of the sheet music and of the playing, both in MIDI. But I have no idea how to visualise this evaluation process (i.e. show where the player has gone wrong) - maybe show the notation and highlight which notes were wrong. How can I show any of this in graphical form, or more precisely on a stave (a music score) itself? I have note details (pitch, duration) and score details (key and time signature) stored in a table, and I'm using Java, but I have no clue how to put all this into graphical form. Any insight is most gratefully appreciated. Thanks in advance

    Read the article

  • Performance of VIEW vs. SQL statement

    - by Matt W.
    I have a query that goes something like the following: select <field list> from <table list> where <join conditions> and <condition list> and PrimaryKey in (select PrimaryKey from <table list> where <join list> and <condition list>) and PrimaryKey not in (select PrimaryKey from <table list> where <join list> and <condition list>) The sub-select queries both have multiple sub-select queries of their own that I'm not showing so as not to clutter the statement. One of the developers on my team thinks a view would be better. I disagree in that the SQL statement uses variables passed in by the program (based on the user's login Id). Are there any hard and fast rules on when a view should be used vs. using a SQL statement? What kind of performance gain issues are there in running SQL statements on their own against regular tables vs. against views. (Note that all the joins / where conditions are against indexed columns, so that shouldn't be an issue.) EDIT for clarification... Here's the query I'm working with: select obj_id from object where obj_id in( (select distinct(sec_id) from security where sec_type_id = 494 and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230) or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278) and sec_usergroup_type_id = 231) ) and sec_obj_id in ( select obj_id from object where obj_ot_id in (select of_ot_id from obj_form left outer join obj_type on ot_id = of_ot_id where ot_app_id = 87 and of_id in (select sec_obj_id from security where sec_type_id = 493 and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230) or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278) and sec_usergroup_type_id = 231) ) ) and of_usage_type_id = 131 ) ) ) ) or (obj_ot_id in (select of_ot_id from obj_form left outer join obj_type on ot_id = of_ot_id where ot_app_id = 87 and of_id in (select sec_obj_id from security where sec_type_id = 493 and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230) or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278) and sec_usergroup_type_id = 231) ) ) and of_usage_type_id = 131 ) and obj_id not in (select sec_obj_id from security where sec_type_id = 494) )
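    If the deciding factor is the login-id parameter, one middle ground sometimes considered (sketch only; the object names here are invented, not the ones from the query above) is an inline table-valued function: it accepts parameters like a procedure but is expanded by the optimiser into the calling query much like a view.

      -- Hypothetical inline table-valued function; the body would be the
      -- same SELECT as above with @UserId in place of the hard-coded id.
      CREATE FUNCTION dbo.fn_VisibleObjects (@UserId INT)
      RETURNS TABLE
      AS
      RETURN
      (
          SELECT o.obj_id
          FROM   object AS o
          WHERE  EXISTS (SELECT 1
                         FROM   security AS s
                         WHERE  s.sec_obj_id = o.obj_id
                         AND    s.sec_usergroup_id = @UserId)   -- placeholder condition
      );
      GO

      -- Called like a view, but with the user's id passed in:
      SELECT obj_id FROM dbo.fn_VisibleObjects(3278);

    That keeps one central definition without baking the user id into a view.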

    Read the article

  • Android map performance with > 800 overlays of KML data

    - by span
    I have a shape file which I have converted to a KML file; I wish to read coordinates from it and then draw paths between the coordinates on a MapView. With the help of this great post: How to draw a path on a map using kml file? I have been able to read the KML into an ArrayList of "Placemarks". This great blog post then showed how to take a list of GeoPoints and draw a path: http://djsolid.net/blog/android---draw-a-path-array-of-points-in-mapview However, the example in that post only draws one path between some points, and since I have many more paths than that I am running into performance problems. I'm currently adding a new RouteOverlay for each of the separate paths, which results in over 800 overlays once they have all been added. This has a performance hit and I would love some input on what I can do to improve it. Here are some options I have considered:
    - Try to add all the points to a List which can then be passed into a class that extends Overlay. In that new class perhaps it would be possible to add and draw the paths in a single Overlay layer? I'm not sure how to implement this though, since the paths are not always intersecting and they have different start and end points. At the moment I'm adding each path, which has several points, to its own list and then adding that to an Overlay. That results in over 700 overlays...
    - Simplify the KML or SHP. Instead of having over 700 different paths, perhaps there is some way to merge them into 100 paths or less? Since a lot of paths intersect at some point, it should be possible to modify the original SHP file so that it merges all intersections. Since I have never worked with these kinds of files before, I have not been able to find a way to do this in QGIS. If someone knows how to do this I would love some input on that.
    Here is a link to the group of shape files if you are interested: http://danielkvist.net/cprg_bef_cbana_polyline.shp http://danielkvist.net/cprg_bef_cbana_polyline.shx http://danielkvist.net/cprg_bef_cbana_polyline.dbf http://danielkvist.net/cprg_bef_cbana_polyline.prj Anyway, here is the code I'm using to add the Overlays. Many thanks in advance.
RoutePathOverlay.java package net.danielkvist; import java.util.List; import android.graphics.Canvas; import android.graphics.Color; import android.graphics.Paint; import android.graphics.Path; import android.graphics.Point; import android.graphics.RectF; import com.google.android.maps.GeoPoint; import com.google.android.maps.MapView; import com.google.android.maps.Overlay; import com.google.android.maps.Projection; public class RoutePathOverlay extends Overlay { private int _pathColor; private final List<GeoPoint> _points; private boolean _drawStartEnd; public RoutePathOverlay(List<GeoPoint> points) { this(points, Color.RED, false); } public RoutePathOverlay(List<GeoPoint> points, int pathColor, boolean drawStartEnd) { _points = points; _pathColor = pathColor; _drawStartEnd = drawStartEnd; } private void drawOval(Canvas canvas, Paint paint, Point point) { Paint ovalPaint = new Paint(paint); ovalPaint.setStyle(Paint.Style.FILL_AND_STROKE); ovalPaint.setStrokeWidth(2); int _radius = 6; RectF oval = new RectF(point.x - _radius, point.y - _radius, point.x + _radius, point.y + _radius); canvas.drawOval(oval, ovalPaint); } public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when) { Projection projection = mapView.getProjection(); if (shadow == false && _points != null) { Point startPoint = null, endPoint = null; Path path = new Path(); // We are creating the path for (int i = 0; i < _points.size(); i++) { GeoPoint gPointA = _points.get(i); Point pointA = new Point(); projection.toPixels(gPointA, pointA); if (i == 0) { // This is the start point startPoint = pointA; path.moveTo(pointA.x, pointA.y); } else { if (i == _points.size() - 1)// This is the end point endPoint = pointA; path.lineTo(pointA.x, pointA.y); } } Paint paint = new Paint(); paint.setAntiAlias(true); paint.setColor(_pathColor); paint.setStyle(Paint.Style.STROKE); paint.setStrokeWidth(3); paint.setAlpha(90); if (getDrawStartEnd()) { if (startPoint != null) { drawOval(canvas, paint, startPoint); } if (endPoint != null) { drawOval(canvas, paint, endPoint); } } if (!path.isEmpty()) canvas.drawPath(path, paint); } return super.draw(canvas, mapView, shadow, when); } public boolean getDrawStartEnd() { return _drawStartEnd; } public void setDrawStartEnd(boolean markStartEnd) { _drawStartEnd = markStartEnd; } } MyMapActivity package net.danielkvist; import java.util.ArrayList; import java.util.Collection; import java.util.Iterator; import android.graphics.Color; import android.os.Bundle; import android.util.Log; import com.google.android.maps.GeoPoint; import com.google.android.maps.MapActivity; import com.google.android.maps.MapView; public class MyMapActivity extends MapActivity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); MapView mapView = (MapView) findViewById(R.id.mapview); mapView.setBuiltInZoomControls(true); String url = "http://danielkvist.net/cprg_bef_cbana_polyline_simp1600.kml"; NavigationDataSet set = MapService.getNavigationDataSet(url); drawPath(set, Color.parseColor("#6C8715"), mapView); } /** * Does the actual drawing of the route, based on the geo points provided in * the nav set * * @param navSet * Navigation set bean that holds the route information, incl. 
* geo pos * @param color * Color in which to draw the lines * @param mMapView01 * Map view to draw onto */ public void drawPath(NavigationDataSet navSet, int color, MapView mMapView01) { ArrayList<GeoPoint> geoPoints = new ArrayList<GeoPoint>(); Collection overlaysToAddAgain = new ArrayList(); for (Iterator iter = mMapView01.getOverlays().iterator(); iter.hasNext();) { Object o = iter.next(); Log.d(BikeApp.APP, "overlay type: " + o.getClass().getName()); if (!RouteOverlay.class.getName().equals(o.getClass().getName())) { overlaysToAddAgain.add(o); } } mMapView01.getOverlays().clear(); mMapView01.getOverlays().addAll(overlaysToAddAgain); int totalNumberOfOverlaysAdded = 0; for(Placemark placemark : navSet.getPlacemarks()) { String path = placemark.getCoordinates(); if (path != null && path.trim().length() > 0) { String[] pairs = path.trim().split(" "); String[] lngLat = pairs[0].split(","); // lngLat[0]=longitude // lngLat[1]=latitude // lngLat[2]=height try { if(lngLat.length > 1 && !lngLat[0].equals("") && !lngLat[1].equals("")) { GeoPoint startGP = new GeoPoint( (int) (Double.parseDouble(lngLat[1]) * 1E6), (int) (Double.parseDouble(lngLat[0]) * 1E6)); GeoPoint gp1; GeoPoint gp2 = startGP; geoPoints = new ArrayList<GeoPoint>(); geoPoints.add(startGP); for (int i = 1; i < pairs.length; i++) { lngLat = pairs[i].split(","); gp1 = gp2; if (lngLat.length >= 2 && gp1.getLatitudeE6() > 0 && gp1.getLongitudeE6() > 0 && gp2.getLatitudeE6() > 0 && gp2.getLongitudeE6() > 0) { // for GeoPoint, first:latitude, second:longitude gp2 = new GeoPoint( (int) (Double.parseDouble(lngLat[1]) * 1E6), (int) (Double.parseDouble(lngLat[0]) * 1E6)); if (gp2.getLatitudeE6() != 22200000) { geoPoints.add(gp2); } } } totalNumberOfOverlaysAdded++; mMapView01.getOverlays().add(new RoutePathOverlay(geoPoints)); } } catch (NumberFormatException e) { Log.e(BikeApp.APP, "Cannot draw route.", e); } } } Log.d(BikeApp.APP, "Total overlays: " + totalNumberOfOverlaysAdded); mMapView01.setEnabled(true); } @Override protected boolean isRouteDisplayed() { // TODO Auto-generated method stub return false; } } Edit: There are of course some more files I'm using but that I have not posted. You can download the complete Eclipse project here: http://danielkvist.net/se.zip
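    For what it's worth, a rough, untested sketch of the first option listed above: one Overlay that draws every path in a single draw() pass, reusing the Paint setup from RoutePathOverlay and leaving out the start/end ovals.

      import java.util.List;
      import android.graphics.Canvas;
      import android.graphics.Paint;
      import android.graphics.Path;
      import android.graphics.Point;
      import com.google.android.maps.GeoPoint;
      import com.google.android.maps.MapView;
      import com.google.android.maps.Overlay;
      import com.google.android.maps.Projection;

      // Sketch: one overlay holding all routes, instead of one overlay per route.
      public class MultiRoutePathOverlay extends Overlay {

          private final List<List<GeoPoint>> _routes; // one inner list per path
          private final Paint _paint;

          public MultiRoutePathOverlay(List<List<GeoPoint>> routes, int pathColor) {
              _routes = routes;
              _paint = new Paint();
              _paint.setAntiAlias(true);
              _paint.setColor(pathColor);
              _paint.setStyle(Paint.Style.STROKE);
              _paint.setStrokeWidth(3);
              _paint.setAlpha(90);
          }

          @Override
          public void draw(Canvas canvas, MapView mapView, boolean shadow) {
              if (shadow || _routes == null) return;
              Projection projection = mapView.getProjection();
              Point point = new Point();
              for (List<GeoPoint> route : _routes) {
                  Path path = new Path();
                  for (int i = 0; i < route.size(); i++) {
                      projection.toPixels(route.get(i), point);
                      if (i == 0) path.moveTo(point.x, point.y);
                      else path.lineTo(point.x, point.y);
                  }
                  if (!path.isEmpty()) canvas.drawPath(path, _paint);
              }
          }
      }

    drawPath would then collect each placemark's geoPoints list into one List<List<GeoPoint>> and add a single MultiRoutePathOverlay to the MapView, rather than 700+ RoutePathOverlay instances.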

    Read the article

  • Possibility of a custom Contacts field with a set list of values and Contacts lookup performance

    - by areyling
    I'm pretty sure it's not viable to do what I'd like to, based on some initial research, but I figured it couldn't hurt to ask the community of experts here in case someone knows a way. I'd like to create a custom field for contacts that the user is able to edit from the main Contacts app; however, the user should only be allowed to select from a list of four specific values. A short list of string values would be ideal, but an int with a min/max range would suffice. I'm interested in knowing if it's possible either way, but I'm also wondering if it makes sense to go this route performance-wise. More specifically, would it be better to look up a contact (based on a phone number) each time a call or SMS message is received, or better to store my own set of data (consisting of name, numbers, and the custom field) and just sync contact info in a thread every so often? Or sync contacts the first time the app is run and then register for changes using a ContentObserver? Here is a similar question with an answer that explains how to add a custom field to a contact. Thanks in advance.
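    On the last part (registering for changes), a minimal sketch of the ContentObserver route - the observer class and URI are the standard Android ones; where it lives and what the re-sync does are invented for illustration:

      // Sketch: inside an Activity or Service. Re-syncs the local cache
      // whenever the Contacts provider reports a change, instead of polling.
      // Uses: android.database.ContentObserver, android.os.Handler,
      //       android.provider.ContactsContract
      private final ContentObserver contactsObserver = new ContentObserver(new Handler()) {
          @Override
          public void onChange(boolean selfChange) {
              // refresh the locally stored name/number/custom-field data here
          }
      };

      @Override
      protected void onResume() {
          super.onResume();
          getContentResolver().registerContentObserver(
                  ContactsContract.Contacts.CONTENT_URI,  // all contacts
                  true,                                    // include descendant URIs
                  contactsObserver);
      }

      @Override
      protected void onPause() {
          super.onPause();
          getContentResolver().unregisterContentObserver(contactsObserver);
      }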

    Read the article

  • How to display many SVGs in Java with high performance

    - by Oak
    What I want: My goal is to be able to display a large number of SVG images on a single drawing area in Java, each with its own translation/rotation/scale values. I'm looking for the simplest solution allowing this, optionally even using OpenGL to speed things up.
    What I've tried: My initial naive approach was to use SVGSalamander to draw directly on a JPanel, but the performance was pathetic. I poked around and learned that I should do something like manually convert each SVG into a BufferedImage created with createCompatibleImage, then do the transformations I want, then draw it using double buffering. I ran into some trouble here, and before I continued I tried looking for frameworks to simplify things.
    What I've looked at: I've been a bit overwhelmed by the available options, which is why I'm turning to SO for help. I've looked at: Cairo (with Glitz maybe?), Libart (not sure if this actually supports SVGs), FengGUI, and Slick (looks promising but a bit of an overkill). But I couldn't decide which is best to start working with, and I hope someone here has experience with any of these doing similar things.
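    A sketch of the rasterise-once approach described above (cache each SVG as a compatible BufferedImage, then only apply AffineTransforms at paint time); the SVG-to-image step is left abstract since it depends on the renderer (SVGSalamander, Batik, etc.), and all class names are invented:

      import java.awt.*;
      import java.awt.geom.AffineTransform;
      import java.awt.image.BufferedImage;
      import java.util.List;
      import javax.swing.JPanel;

      // Sketch: each sprite keeps a pre-rasterised image plus its transform state.
      class Sprite {
          BufferedImage image;       // rasterised once, e.g. from an SVG renderer
          double x, y, rotation, scale;
      }

      class SpritePanel extends JPanel {
          private final List<Sprite> sprites;

          SpritePanel(List<Sprite> sprites) { this.sprites = sprites; }

          static BufferedImage compatibleImage(int w, int h) {
              // createCompatibleImage gives a format the graphics pipeline can blit quickly
              return GraphicsEnvironment.getLocalGraphicsEnvironment()
                      .getDefaultScreenDevice().getDefaultConfiguration()
                      .createCompatibleImage(w, h, Transparency.TRANSLUCENT);
          }

          @Override
          protected void paintComponent(Graphics g) {
              super.paintComponent(g);
              Graphics2D g2 = (Graphics2D) g;
              for (Sprite s : sprites) {
                  AffineTransform t = new AffineTransform();
                  t.translate(s.x, s.y);
                  t.rotate(s.rotation);
                  t.scale(s.scale, s.scale);
                  g2.drawImage(s.image, t, null);   // cheap blit, no SVG parsing per frame
              }
          }
      }

    The SVG parsing and rendering cost is then paid once per image rather than on every repaint.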

    Read the article

  • Performance of Multiple Joins

    - by geeko
    Greetings Overflowers, I need to query against objects with many/complex spatial conditions. In relational databases that translates to many joins (possibly 10+). I'm new to this business and wondering whether to go with MS SQL Server 2008 R2 or Oracle 11g, or with document-based solutions such as RavenDB, or simply with some spatial database (GIS)... Any thoughts? Regards
    UPDATE: Thank you all for your answers. Would anybody opt for document/spatial databases? My database would consist of tens of millions to a few billion records, mostly read-only. Almost no updates, except in case of mistakes in the input; inserts happen overnight and not that frequently. The join tables are predicted beforehand, but the number of self joins (tables joining themselves multiple times) is not. Small pages of results from such queries are going to be viewed on a highly interactive website, so response time is critical. Any predictions on how this can perform on MS SQL Server 2008 R2 or Oracle 11g? I'm also concerned about boosting performance by adding more servers; which one scales better? How about PostgreSQL?

    Read the article

  • Javascript, IE, Strings, and Performance problems

    - by Infinity
    Hey guys, So we have this product, and it's really slow in IE. We've already applied a lot of the practices advised by the IE guys themselves (like this, and this), and try to sacrifice clean code for performance in the critical parts like DOM manipulation. However, as you can see in this IE profiler screenshot.. Just "String" is the biggest offender. Almost 750ms of exclusive time. Does this mean IE is spending 750ms just instantiating Strings? I also read this stuff on the Opera dev blog: A build script can remove whitespace, comments, replace strings with Array lookups (to avoid MSIE creating a string object for every single instance of a string — even in conditions) But no more info regarding this. Anyone can clarify? It seems like IE has to create a full String instance every time you have " " in your code, which could explain this, but I don't know what the array lookup optimization would look like. BTW- we don't really do much of string concatenation anywhere in the code. The library we use is MooTools 1.2.4 Any suggestions will be appreciated! Thx

    Read the article

  • Why doesn't this CompiledQuery give a performance improvement?

    - by Grammarian
    I am trying to speed up an often used query. Using a CompiledQuery seemed to be the answer. But when I tried the compiled version, there was no difference in performance between the compiled and non-compiled versions. Can someone please tell me why using Queries.FindTradeByTradeTagCompiled is not faster than using Queries.FindTradeByTradeTag?

      static class Queries
      {
          // Pre-compiled query, as per http://msdn.microsoft.com/en-us/library/bb896297
          private static readonly Func<MyEntities, int, IQueryable<Trade>> mCompiledFindTradeQuery =
              CompiledQuery.Compile<MyEntities, int, IQueryable<Trade>>(
                  (entities, tag) => from trade in entities.TradeSet
                                     where trade.trade_tag == tag
                                     select trade);

          public static Trade FindTradeByTradeTagCompiled(MyEntities entities, int tag)
          {
              IQueryable<Trade> tradeQuery = mCompiledFindTradeQuery(entities, tag);
              return tradeQuery.FirstOrDefault();
          }

          public static Trade FindTradeByTradeTag(MyEntities entities, int tag)
          {
              IQueryable<Trade> tradeQuery = from trade in entities.TradeSet
                                             where trade.trade_tag == tag
                                             select trade;
              return tradeQuery.FirstOrDefault();
          }
      }
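    One variant worth testing (a sketch, not a guaranteed fix) folds FirstOrDefault into the compiled expression itself, so nothing further is composed on top of the compiled query at the call site:

      // Sketch: compile the whole query, including FirstOrDefault, so the
      // call site does not apply extra operators to the compiled result.
      private static readonly Func<MyEntities, int, Trade> mCompiledFindTrade =
          CompiledQuery.Compile<MyEntities, int, Trade>(
              (entities, tag) => entities.TradeSet
                                         .Where(trade => trade.trade_tag == tag)
                                         .FirstOrDefault());

      public static Trade FindTradeByTradeTagCompiled(MyEntities entities, int tag)
      {
          return mCompiledFindTrade(entities, tag);
      }

    Whether this changes anything depends on the Entity Framework version in use and is only a hypothesis here.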

    Read the article

  • Performance of inter-database query (between linked servers)

    - by Swoosh
    I have an import between 2 linked servers. I basically have to get the data from a multiple join into a table on my side. The current query is something like this:

      select a.*
      from db1.dbo.tbl1 a
      inner join db1.dbo.tbl2 on ...
      inner join db1.dbo.tbl3 on ...
      inner join db1.dbo.tbl4 on ...
      inner join db2.dbo.myside on ...

    db1 = linked server, db2 = my own database. After this, I am using an insert into + select to add the data to my table, which is located in db2 (usually a few hundred records; this import runs once a minute). My question is related to performance. The tables on the linked server (tbl1, tbl2, tbl3, tbl4) are huge tables with millions of records, and they are slowing down the import process. I was told that if I do the join on the "other" side (db1 - the linked server), for example in a stored procedure, then even if the query looks the same it would run faster. Is that right? This is kinda hard to test. Note that the join contains a table from my database too. Also, are there other "tricks" I could use in order to make this run faster? Thanks
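    One way to try the "join on the other side" idea without writing a remote stored procedure is OPENQUERY, which ships the whole inner statement to db1 in one go and only joins the (hopefully small) result against the local table. A sketch, with column names as placeholders in the same style as the query above:

      select r.*
      from openquery(db1, '
               select a.key_col, a.col1, a.col2
               from dbo.tbl1 a
               inner join dbo.tbl2 on ...
               inner join dbo.tbl3 on ...
               inner join dbo.tbl4 on ...
           ') as r
      inner join db2.dbo.myside m on m.key_col = r.key_col

    The remote server then does its own join over the huge tables and only the reduced row set crosses the link; the join against db2.dbo.myside still happens locally.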

    Read the article

  • Performance when querying a View

    - by Nate Bross
    I'm wondering if this is a bad practice or if in general this is the correct approach. Let's say that I've created a view that combines a few attributes from a few tables. My question: what do I need to do so I can query against this view as if it were a table, without worrying about performance? All attributes in the original tables are indexed; my concern is that the resulting view will have hundreds of thousands of records, which I will want to narrow down quite a bit based on user input. What I'd like to avoid is having multiple versions of the code that generates this view floating around with a few extra "where" conditions to facilitate the user input filtering. For example, assume my view has the header VIEW(Name, Type, DateEntered); it may have 100,000+ rows (possibly millions). I'd like to be able to create this view in SQL Server and then, in my application, write queries like this:

      SELECT Name, Type, DateEntered
      FROM MyView
      WHERE DateEntered BETWEEN @date1 and @date2;

    Basically, I am denormalizing my data for a series of reports that need to be run, and I'd like to centralize where I pull the data from. Maybe I'm not looking at this problem from the right angle though, so I'm open to alternative ways to attack this.
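    For reference, the pattern being described is just a single view definition with the filtering left to the caller; as long as the predicates hit indexed columns of the underlying tables, SQL Server expands the view into the query much as if the SELECT had been written inline. Illustrative names only:

      -- Sketch: one central definition of the denormalised shape...
      CREATE VIEW dbo.MyView
      AS
      SELECT p.Name,
             t.Type,
             p.DateEntered
      FROM   dbo.Person     AS p
      JOIN   dbo.PersonType AS t ON t.TypeId = p.TypeId;
      GO

      -- ...and the per-report filtering stays in the application query:
      SELECT Name, Type, DateEntered
      FROM   dbo.MyView
      WHERE  DateEntered BETWEEN @date1 AND @date2;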

    Read the article

  • Determining Best Table Structure for MySQL Performance

    - by Joe Majewski
    I'm working on a browser-based RPG for one of my websites, and right now I'm trying to determine the best way to organize my SQL tables for performance and maintenance. Here's my question: Does the number of columns in an SQL table affect the speed in which it can be queried? I am not a newbie when it comes to PHP or MySQL. I used to develop things with the common goal of getting them to work, but I've recently advanced to the stage where a functional program is not good enough unless it's fast and reliable. Anyways, right now I have a members table that has around 15 columns. It contains information such as the player's username, password, email, logins, page views, etcetera. It doesn't contain any information on the player's progress in the game, however. If I added columns for things such as army size, gold, turns, and whatnot, then it could easily rise to around 40 or 50 total columns. Oh, and my database structure IS normalized. Will a table with 50 columns that gets constantly queried be a bad idea? Should I split it into two tables; one for the user's general information and one for the user's game statistics? I know I could check the query time myself, but I haven't actually created the tables yet and I think I'd be better off with some professional advice on this important decision for my game. Thank you for your time! :)
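    If the split is chosen, the usual shape is a 1:1 companion table keyed by the member id, so the frequently read account columns stay narrow. Illustrative MySQL DDL only, not a verdict on which design is better:

      -- Sketch: narrow account table plus a 1-to-1 game-statistics table.
      CREATE TABLE members (
          member_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          username  VARCHAR(30)  NOT NULL,
          password  CHAR(60)     NOT NULL,   -- e.g. a hash
          email     VARCHAR(255) NOT NULL
      ) ENGINE=InnoDB;

      CREATE TABLE member_stats (
          member_id INT UNSIGNED NOT NULL PRIMARY KEY,
          army_size INT UNSIGNED NOT NULL DEFAULT 0,
          gold      BIGINT       NOT NULL DEFAULT 0,
          turns     INT UNSIGNED NOT NULL DEFAULT 0,
          FOREIGN KEY (member_id) REFERENCES members(member_id)
      ) ENGINE=InnoDB;

    Queries that need both halves join on member_id; queries that only need login data never touch the wide statistics row.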

    Read the article

  • Performance issues when using SSD for a developer notebook (WAMP/LAMP stack)?

    - by András Szepesházi
    I'm a web application developer using my notebook as a standalone development environment (WAMP stack). I just switched from a Core 2 Duo Vista 32-bit notebook with 2 GB RAM and a SATA HDD to an i5-2520M Win7 64-bit notebook with 4 GB RAM and a 128 GB SSD (Corsair P3 128). My initial experience was what I expected: fast boot, quick load of all the applications (Eclipse now takes 5 seconds as opposed to 30 s on my old notebook), overall a great experience. Then I started to build up my development stack, both as LAMP (using VirtualBox with a Debian guest) and WAMP (Windows-native Apache + MySQL + PHP); I wanted to compare the two. This all still worked out great. Then I started to pull my projects into these stacks, and here came the nasty surprise: one of those projects produced much worse response times than on my old notebook (true for both the VirtualBox and WAMP stacks). Apache, PHP and MySQL configurations were practically identical in all environments. I started to do a lot of benchmarking and profiling, and here is what I've found:
    All general benchmarks (Performance Test 7.0, HDTune Pro, wPrime2 and some more) gave a big advantage to the new notebook. Nothing surprising here. Disc-specific tests showed that read/write operations peaked around 380M/160M for the SSD, and all the different-sized block operations also performed very well.
    Apache benchmarking with Apache Benchmark for a small static html file (10 concurrent threads, 500 iterations):
      Old notebook: min 47ms, median 111ms, max 156ms
      New WAMP stack: min 71ms, median 135ms, max 296ms
      New LAMP stack (in VirtualBox): min 6ms, median 46ms, max 175ms
    Right here I don't get why the native WAMP stack performed so badly, but at least the LAMP environment brought the expected speed.
    Apache measurement for non-cached PHP content; the PHP runs a loop of 1000 and generates sha1(uniqid()) inside. Again, 10 concurrent threads and 500 iterations were used for the benchmark:
      Old notebook: min 0ms, median 39ms, max 218ms
      New WAMP stack: min 20ms, median 61ms, max 186ms
      New LAMP stack (in VirtualBox): min 124ms, median 704ms, max 2463ms
    What the hell? The new LAMP stack performed miserably, and even the new native WAMP was outperformed by the old notebook.
    PHP + MySQL test. The test consists of connecting to a database and reading a single record from a table using an INNER JOIN on 3 more (indexed) tables, repeated 100 times within a loop. Databases were identical. 10 concurrent threads, 100 iterations:
      Old notebook: min 1201ms, median 1734ms, max 3728ms
      New WAMP stack: min 367ms, median 675ms, max 1893ms
      New LAMP stack (in VirtualBox): min 1410ms, median 3659ms, max 5045ms
    And the same test with concurrency set to 1 (instead of 10):
      Old notebook: min 1201ms, median 1261ms, max 1357ms
      New WAMP stack: min 399ms, median 483ms, max 539ms
      New LAMP stack (in VirtualBox): min 285ms, median 348ms, max 444ms
    Strictly for my purposes, as I'm using a self-contained development environment (= low concurrency), I could be satisfied with the second test's result, though I have no idea why the VirtualBox environment performed so badly with higher concurrency.
    Finally, I performed a test of including many PHP files. The application I mentioned at the beginning, the one performing so badly, has a heavy bootstrap: it loads hundreds of small library and configuration files while initializing. So this test does nothing but include about 100 files. Concurrency set to 1, 100 iterations:
      Old notebook: min 140ms, median 168ms, max 406ms
      New WAMP stack: min 434ms, median 488ms, max 604ms
      New LAMP stack (in VirtualBox): min 413ms, median 1040ms, max 1921ms
    Even if I consider that VirtualBox reached those files via shared folders, which slows things down a bit, I still don't see how the old notebook could so heavily outperform both new configurations. And I think this is the real root of the slow performance, as the application uses even more includes, and the whole bootstrap occurs several times within a page request (for each ajax call, for example). To sum it up: here I am with a brand new high-performance notebook that loads the same page in 20 seconds that my old notebook can do in 5-7 seconds. Needless to say, I'm not a very happy person right now. Why do you think I'm seeing these poor performance numbers, and what are my options to remedy the situation?

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development, and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model, so I am taking this opportunity to reconsider how we set up the system. Basically, the steps that need to happen are:
    1. Some standard packages and libraries, such as compilers and databases, need to be downloaded and installed.
    2. Some custom scientific models need to be downloaded and compiled from source, as they are not commonly provided as packages.
    3. New users need to be created to manage the databases and run the models.
    4. A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed.
    5. Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts.
    I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality, except there are a couple of use cases that I am wondering about:
    - During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source.
    - We may have to deploy on machines that are isolated from the Internet, i.e. all configuration and setup files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models.
    I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus, as it would allow the development team to set up test installations outside of VirtualBox.

    Read the article

  • Tokyo Tyrant ulog / update log management.

    - by Nathan Milford
    I'm testing Tokyo Tyrant in a master-master setup and have found the ulog grows out of control and locks up the disk. At first I found the -ulim option useful and limited the logfile size; however, it simply rolls over to a new log, leaving the old ones to clutter up the partition. I suppose I'll write a shell script that deletes ulogs older than X, once I find out how far back in the update log Tokyo Tyrant needs to go in order to fail over. Does anyone have experience with Tokyo Tyrant here? Do you have a feel (acknowledging that every install is different based on what is being stored) for the optimal ulog size vs. how far back a Tokyo Tyrant instance needs to look in the ulog to assume master status? Thanks, nathan

    Read the article

  • How can you know what is w3wp.exe doing? (or how to diagnose a performance problem)

    - by Daniel Magliola
    I'm having a performance problem in a site we've made, and I'm not exactly sure how to start diagnosing it. The short description is: We have a very small site (http://hearablog.com) with very little traffic, in a crappy dedicated server, CPU is always very high, sometimes it stays at 100% for minutes, and w3wp.exe is taking most of it. A typical scenario is w3wp.exe takes 60%, and SQL Server takes about 30%. Our DB is pretty small too. Long description and more details: The site is hosted in a very crappy server by Cari.Net. From the beginning we had the feeling that the server didn't quite behave correctly, like some things would take just too long, so this could be a configuration problem from the get go. It may also be that we are getting a virtual server while we're supposed to have a dedicated one, although we have no evidence that'd indicate this, except for the fact that the server tends to be quite slow. The server is Windows 2008 Standard 64-bit, with SQL 2008 Express Hardware is a Celeron 2.80 GHz, 1Gb RAM The website is developed in ASP.Net MVC, using Entity Framework for data access. Now, this is pretty crappy hardware, but i've had other servers with these guys, with equivalent (or worse) HW, and performance is much better than this one. That said, the other servers have W2003 and SQL2005, and I'm using ASP.Net "WebForms" 2.0, no MVC, no LINQ, no EF; so I'm not sure whether going to 2008 / the other stuff means a big performance penalty is expected. I'm serving MP3 files (5-20 Mb) regularly, which is a slightly unusual load, maybe that is causing some kind of problems? Would that cause w3wp to use a lot of CPU? Disk usage seems very low. Memory is usually around 90%, but disk usage seems to indicate it's not paging much. I get tons of e-mails every day about SQL timeouts, for queries taking over 30 seconds, although all our queries are pretty straightforward (or should be, but EF may be screwing it up). This is what resource monitor looks like in one of these "sprints" of 100% CPU, in case there's anything useful there. And a snapshot of some performance counters: Now, what confuses me very much is that CPU usage of w3wp is just so high. It shouldn't be doing much really... So my questions are... Is there any way of finding out "what" it is doing? Maybe even profile it? Any performance counters I should be looking at? Is this to be expected given this hardware/software configuration? Is this could be cause by some kind of configuration failure, where would you start looking? Thank you VERY much. Daniel Magliola

    Read the article

  • C# memory management: unsafe keyword and pointers

    - by Alerty
    What are the consequences (positive and negative) of using the unsafe keyword in C# to use pointers? For example, what becomes of garbage collection, what are the performance gains/losses, how do those gains/losses compare to other languages' manual memory management, what are the dangers, and in which situations is it really justifiable to make use of this language feature?
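    For context, the feature in question is the fixed/pointer pattern below (a minimal example, compiled with the /unsafe switch); one garbage-collection consequence is visible directly in the code: the array must stay pinned for as long as the raw pointer is in use, which is part of the trade-off being asked about.

      // Minimal example of the unsafe/fixed pattern. Requires /unsafe.
      static class PointerDemo
      {
          public static unsafe int Sum(int[] values)
          {
              int total = 0;
              fixed (int* p = values)          // pin the managed array so the GC cannot move it
              {
                  for (int i = 0; i < values.Length; i++)
                      total += p[i];           // raw pointer indexing, no bounds check
              }                                // the array is unpinned here
              return total;
          }
      }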

    Read the article

  • SNMP based network discovery (switches), device (ports on switches) power management

    - by SaM
    In an enterprise network, what would be the right way to generate a list of switches (SNMP managed)? Is it reasonable to ask the organization to supply a list such as this: switch name, IP address of the switch, location, SNMP community strings? Or are there standard ways to run discovery scans - UDP broadcasts? After having generated a repository such as the above: given a single switch, how do I query it for the list of all devices attached to it? Finally, how do I selectively power ports down and up (remotely, using SNMP)? The platform is going to be .NET based (C#) and the library being used is SharpSNMP.

    Read the article

  • Digital Asset Management, iPhoto / Aperture server... alternative

    - by Sisyphus
    Afternoon. Clients (10): all Apples running either Leopard or Snow Leopard. Server: Snow Leopard Server (and I have an old Dell PowerEdge 650 at home running Gentoo 2.6, if anybody has a Linux solution). The situation: I work in a small design company with 8 people; at present we are looking to consolidate all our image files in one location. At the moment we each use our preferred single-user DAM solution, be it Adobe Bridge or iPhoto/Aperture (some don't bother at all). The filetypes commonly used are .psd, .pdf, .eps, .tiff, .jpg and RAW image files. Ideally what is needed:
    - Centralised on one server, but allows us to search via Spotlight (not essential, but would be nice)
    - Includes searchable metadata information such as date, location, title
    - Open-source or as low cost as possible
    - Allows simultaneous users to import files
    So far I have looked at a few open source DAM systems, such as Razuna, Gallery (not strictly DAM), ResourceSpace and Notre-DAM; while these are brilliant and open-source, they don't integrate as smoothly with the desktop as iPhoto and Aperture. For iPhoto and Aperture, I have tried creating a shared library on the server (a tad laggy), and also putting a library on a drive with no permissions and letting each client read from it; however, when clients want to add images to it, only one user at a time can write to the library... Any ideas what could fulfill our needs? Or is it time to bite the bullet for Final Cut Server? Thanks in advance.

    Read the article

  • Network Management Cable Labeling Techniques and their alternatives [closed]

    - by Alex
    Possible Duplicate: What is the most effective solution you used to label cables? Yes i know there are a lot of howtos and already answered questions about this topic, like this one: How do you organise the cables in your racks? Currently i am searching the web for different techniques (alternatives) for labeling the cables at the server racks and/or data centers. Unfortunately i do not have any experience with labeling/documentation of network cables in a large scale. As far as I could lookup by now the current labeling techniques are coloring and a self defined print-labeling technique (numbering, text) maybe also according to a standard which are usually used. I want to know if QR, RFID (ok RFID in a data center would be stupid due to the radio frequency wouldn't it be?), Barcodes or similar (??) have already been used by some administrators or why they did not consider such techniques at all? Too complicated (with QR scanner etc..) if you are in front of the cables and want to get quick feedback for what the cable is? What alternatives are out there? Advantages/Disadvantages? Best-Practice? I would appreciate any help on this topic, thank you! Regards, Alex

    Read the article

  • Change Management Software

    - by Andrew
    I manage an 80,000 user CIS application written in Uniface. Every form in the application, and many of its processes, are represented by .frm files. We have hundreds of these files and 5 instances of the application. Instances include multiple production installations which must be kept sync'd. We do not get MD5 from our vendor for files that are released to us as patches. We have been using a spreadsheet to track changes, but this is far from ideal. Is there a commercial application that can be purchased that will allow us to track changes to the instances? Thank you all! EDIT: Patches are released as zip files with either FRM files in them or SQL files or a mix of both. SQL files will contain statements that need to be run in Oracle. Patches are also assigned unique patch numbers.

    Read the article

  • Unix Password Management Keyring

    - by Phil
    I am looking for a password manager for a command-line Unix environment. So far all I can find are keyring applications for Windows, Linux, and Mac. But no command-line Unix interfaces. My main goal is to be able to access a password keyring through an SSH connection to a machine that has no graphical user interface. If there are no good unix password keyrings out there, what would be a better way to store personal passwords in a central location?

    Read the article
