Search Results

Search found 40567 results on 1623 pages for 'database performance'.


  • Cost of logic in a query

    - by FrustratedWithFormsDesigner
    I have a query that looks something like this: select xmlelement("rootNode", (case when XH.ID is not null then xmlelement("xhID", XH.ID) else xmlelement("xhID", xmlattributes('true' AS "xsi:nil"), XH.ID) end), (case when XH.SER_NUM is not null then xmlelement("serialNumber", XH.SER_NUM) else xmlelement("serialNumber", xmlattributes('true' AS "xsi:nil"), XH.SER_NUM) end), /*repeat this pattern for many more columns from the same table...*/ FROM XH WHERE XH.ID = 'SOMETHINGOROTHER' It's ugly and I don't like it, and it is also the slowest-executing query (there are others of similar form, but much smaller, and they aren't causing any major problems - yet). Maintenance is relatively easy as this is mostly a generated query, but my concern now is performance. I am wondering how much overhead there is for all of these case expressions. To see if there was any difference, I wrote another version of this query as: select xmlelement("rootNode", xmlforest(XH.ID, XH.SER_NUM,... (I know that this query does not produce exactly the same thing; my plan was to move the logic to PL/SQL or XSL.) I tried to get execution plans for both versions, but they are the same. I'm guessing that the logic does not get factored into the execution plan. My gut tells me the second version should execute faster, but I'd like some way to prove that (other than writing a PL/SQL test function with timing statements before and after the query and running that code over and over again to get a test sample). Is it possible to get a good idea of how much the case-when will cost? Also, I could write the case-when using the decode function instead. Would that perform better (than case expressions)?

    Read the article

  • I just discovered why all ASP.Net websites are slow, and I am trying to work out what to do about it

    - by James
    I just discovered that every request in an ASP.Net web application gets a Session lock at the beginning of a request, and then releases it at the end of the request!!! I mean, WTF Microsoft! In case the implication is lost on you, as it was for me at first, this basically means the following: Anytime an ASP.Net webpage is taking a long time to load (maybe due to a slow database call or whatever), and the user decides they want to navigate to a different page because they are tired of waiting, THEY CAN'T! The ASP.Net session lock forces the new page request to wait until the original request has finished its painfully slow load. Arrrgh. Anytime an UpdatePanel is loading slowly, and the user decides to navigate to a different page before the UpdatePanel has finished updating... THEY CAN'T! The ASP.Net session lock forces the new page request to wait until the original request has finished its painfully slow load. Double Arrrgh! So what are the options? So far I have come up with: Implement a custom SessionStateDataStore, which ASP.Net supports. I haven't found too many out there to copy, and it seems kind of high risk and easy to mess up. Keep track of all requests in progress, and if a request comes in from the same user, cancel the original request. Seems kind of extreme, but it would work (I think). Don't use Session! When I need some kind of state for the user, I could just use Cache instead, and key items on the authenticated user's name, or some such thing. Again, seems kind of extreme. I really can't believe that the ASP.Net Microsoft team would have left such a huge performance bottleneck in the framework at version 4.0! Am I missing something obvious? How hard would it be to use a thread-safe collection for the Session? Arrrrghhhhhh. Any advice much appreciated.

    Read the article

  • How do I block my Rails app from being hit by bots?

    - by codeman73
    I'm not even sure I'm using the right terminology, whether this is actually bots or not. I didn't want to use the word 'spam' because it's not like I have comments or posts that are being created/spammed. It looks more like something is making the same repeated request to my domain, which is what made me think it was some kind of bot. I've opened my first rails app to the 'public', which is really a small group of users, <50 currently. That was last Friday. I started having performance issues today, so I looked at the log and I see tons of these RoutingErrors: ActionController::RoutingError (No route matches "/portalApp/APF/pages/business/util/whichServer.jsp" with {:method=>:get}): They are filling up the log and I'm assuming this is causing the slowdown. Note the .jsp on the end, and this is a rails app, so I've got no urls remotely like this in my app. I mean, the /portalApp I don't even have, so I don't know where this is coming from. This is hosted at Dreamhost and I chatted with one of their support people, and he suggested a couple sites that detail using htaccess to block things. But that looks like you need to know the IP or domain that the requests are coming from, which I don't. How can I block this? How can I find the IP or domain from the request? Any other suggestions?

    Read the article

  • iPhone: Which are the most useful techniques for faster Bluetooth?

    - by Mike Howard
    Hi. I'm adding peer-to-peer bluetooth using GameKit to an iPhone shoot-em-up, so speed is vital. I'm sending about 40 messages a second each way, most of them with the faster GKSendDataUnreliable, all serializing with NSCoding. In testing between a 3G and 3GS, this is slowing the 3G down a lot more than I'd like. I'm wondering where I should concentrate my efforts to speed it up. How much slower is GKSendDataReliable? For the few packets that have to get there, would it be faster to send a GKSendDataUnreliable and have the peer send an acknowledgement so I can send again if I don't get the Ack within, say, 100ms? How much faster would it be to create the NSData instance using a regular C array rather than archiving with the NSCoding protocol? Is this serialization process (for about a dozen floats) just as slow as you'd expect from an object creation/deallocation overhead, or is something particularly slow happening? I heard that (for example) sending four separate sets of data is much, much slower than sending one piece of data four times the size. Would I make a significant saving by sending separate packets of data that wouldn't always go together in the same packet when they happen at the same time? Are there any other bluetooth performance secrets I've missed? Thanks for your help.

    Read the article

  • Should try...catch go inside or outside a loop?

    - by mmyers
    I have a loop that looks something like this: for(int i = 0; i < max; i++) { String myString = ...; float myNum = Float.parseFloat(myString); myFloats[i] = myNum; } This is the main content of a method whose sole purpose is to return the array of floats. I want this method to return null if there is an error, so I put the loop inside a try...catch block, like this: try { for(int i = 0; i < max; i++) { String myString = ...; float myNum = Float.parseFloat(myString); myFloats[i] = myNum; } } catch (NumberFormatException ex) { return null; } But then I also thought of putting the try...catch block inside the loop, like this: for(int i = 0; i < max; i++) { String myString = ...; float myNum; try { myNum = Float.parseFloat(myString); } catch (NumberFormatException ex) { return null; } myFloats[i] = myNum; } So my question is: is there any reason, performance or otherwise, to prefer one over the other? EDIT: The consensus seems to be that it is cleaner to put the loop inside the try/catch, possibly inside its own method. However, there is still debate on which is faster. Can someone test this and come back with a unified answer? (EDIT: did it myself, but voted up Jeffrey and Ray's answers)
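
    A minimal sketch of the consensus refactor from the edit above - the whole loop inside one try/catch, extracted into its own method (the method name and input array are illustrative, not from the post):

        // Returns null as soon as any value fails to parse, matching the
        // question's "return null on error" contract.
        private static float[] parseFloats(String[] inputs) {
            float[] myFloats = new float[inputs.length];
            try {
                for (int i = 0; i < inputs.length; i++) {
                    myFloats[i] = Float.parseFloat(inputs[i]);
                }
            } catch (NumberFormatException ex) {
                return null; // one bad value invalidates the whole array
            }
            return myFloats;
        }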

    Read the article

  • MySQL: optimization of table (indexing, foreign key) with no primary keys

    - by Haradzieniec
    Each member has 0 or more orders. Each order contains at least 1 item. memberid - varchar, not integer - that's OK (please do not mention that's not very good, I can't change it). So, there are 3 tables: members, orders and order_items. Orders and order_items are below: CREATE TABLE `orders` ( `orderid` INT(11) UNSIGNED NOT NULL AUTO_INCREMENT, `memberid` VARCHAR( 20 ), `Time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP , `info` VARCHAR( 3200 ) NULL , PRIMARY KEY (orderid) , FOREIGN KEY (memberid) REFERENCES members(memberid) ) ENGINE = InnoDB; CREATE TABLE `order_items` ( `orderid` INT(11) UNSIGNED NOT NULL, `item_number_in_cart` tinyint(1) NOT NULL , --- 5 items in cart = 5 rows `price` DECIMAL (6,2) NOT NULL, FOREIGN KEY (orderid) REFERENCES orders(orderid) ) ENGINE = InnoDB; So, the order_items table looks like: orderid - item_number_in_cart - price: ... 1000456 - 1 - 24.99 1000456 - 2 - 39.99 1000456 - 3 - 4.99 1000456 - 4 - 17.97 1000457 - 1 - 20.00 1000458 - 1 - 99.99 1000459 - 1 - 2.99 1000459 - 2 - 69.99 1000460 - 1 - 4.99 ... As you see, the order_items table has no primary key (and I think there is no sense in creating an auto_increment id for this table, because once we want to extract data, we always extract it as WHERE orderid='1000456' order by item_number_in_cart asc - the whole block; an id wouldn't be helpful in queries). Once data is inserted into order_items, it's not UPDATEd, just SELECTed. The questions are: I think it's a good idea to put an index on item_number_in_cart. Could anybody please confirm that? Is there anything else I have to do with order_items to increase the performance, or does that look pretty good? I could miss something because I'm a newbie. Thank you in advance.
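
    As a sketch of one common approach (my assumption, not confirmed in the thread): since rows are only ever read per order and sorted by item_number_in_cart, a composite primary key on (orderid, item_number_in_cart) would serve both the lookup and the sort with a single index range scan. A minimal JDBC illustration, with connection details as placeholders:

        import java.sql.*;

        public class OrderItemsDemo {
            public static void main(String[] args) throws SQLException {
                // Assumed DDL, not from the question:
                //   ALTER TABLE order_items ADD PRIMARY KEY (orderid, item_number_in_cart);
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost/shop", "user", "pass");
                     PreparedStatement ps = conn.prepareStatement(
                         "SELECT item_number_in_cart, price FROM order_items " +
                         "WHERE orderid = ? ORDER BY item_number_in_cart ASC")) {
                    ps.setInt(1, 1000456); // example orderid from the post
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getInt(1) + " - " + rs.getBigDecimal(2));
                        }
                    }
                }
            }
        }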

    Read the article

  • Why would 1,000 subforms in a db be a bad idea?

    - by KlaymenDK
    Warm-up I'm trying to come up with a good way to implement customized document forms. It's for a tool to request access to applications; each application will want to ask its own specific questions. The thing is, we have one kind of (common) user who needs to fill in and submit documents based on templates, and another kind of (super) user who needs to be able to define what each template needs to contain. One implementation option would be to use a form (with the basic mandatory stuff), and have that form dynamically include a subform appropriate to the specific task at hand. The gist of the matter is that we could (=will!) quite easily end up having many hundreds of different subforms! (NB. These subforms will be maintained in an automated manner, but that is another topic that may be considered outside the scope of this Question.) Question It's common knowledge that having a lot of views in a Notes database is a Bad Thing. But has anyone tried pushing the number of forms or subforms, and can you share any experience regarding performance?

    Read the article

  • Best Approach for Checking and Inserting Records

    - by nevets1219
    In one of our existing C programs, the purpose is: Open connection to DB; for record in all_record: if record contains certain data: if record is NOT in table A: // see #1 insert record information into table A and B // see #2; Close connection to DB. (#1 is a 'select field from table where field=XXX' query; #2 is the 2 inserts.) This is typically done every X months to sync everything up, or so I'm told. I've also been told that this process takes roughly a couple of days. There are (currently) at most 2.5 million records (though not necessarily all 2.5m will be inserted). One of the tables contains 10 fields and the other 5 fields. There isn't much to be done about iterating through the records since that part can't be changed at the moment. What I would like to do is speed up the part where I query MySQL. I'm not sure if I have left out any important details -- please let me know! I'm also no SQL expert so feel free to point out the obvious. I thought about: Putting all the inserts into a transaction (at the moment I'm not sure how important it is for the transaction to be all-or-none or if this affects performance); Using Insert X Where Not Exists Y; LOAD DATA INFILE (but that would require I create a (possibly) large temp file). I read that (hopefully someone can confirm) I should drop indexes so they aren't re-calculated. mysql Ver 14.7 Distrib 4.1.22, for sun-solaris2.10 (sparc) using readline 4.3
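
    The "Insert X Where Not Exists Y" idea from the options above, sketched in Java/JDBC rather than the original C (table and column names here are placeholders I made up): each row is inserted only if its key is not already in table A, the statements are batched to cut round trips, and everything runs in one transaction.

        import java.sql.*;
        import java.util.List;

        public class SyncRecords {
            // records: (key, value) pairs extracted while iterating, as in the post.
            static void insertMissing(Connection conn, List<String[]> records) throws SQLException {
                conn.setAutoCommit(false);
                String sql = "INSERT INTO table_a (field1, field2) "
                           + "SELECT ?, ? FROM DUAL "
                           + "WHERE NOT EXISTS (SELECT 1 FROM table_a WHERE field1 = ?)";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (String[] r : records) {
                        ps.setString(1, r[0]);
                        ps.setString(2, r[1]);
                        ps.setString(3, r[0]);
                        ps.addBatch(); // one round trip for many statements
                    }
                    ps.executeBatch();
                    conn.commit();
                } catch (SQLException e) {
                    conn.rollback();
                    throw e;
                }
            }
        }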

    Read the article

  • How to get results efficiently out of an Octree/Quadtree?

    - by Reveazure
    I am working on a piece of 3D software that sometimes has to perform intersections between massive numbers of curves (sometimes ~100,000). The most natural way to do this is to do an N^2 bounding box check, and then those curves whose bounding boxes overlap get intersected. I heard good things about octrees, so I decided to try implementing one to see if I would get improved performance. Here's my design: Each octree node is implemented as a class with a list of subnodes and an ordered list of object indices. When an object is being added, it's added to the lowest node that entirely contains the object, or some of that node's children if the object doesn't fill all of the children. Now, what I want to do is retrieve all objects that share a tree node with a given object. To do this, I traverse all tree nodes, and if they contain the given index, I add all of their other indices to an ordered list. This is efficient because the indices within each node are already ordered, so finding out if each index is already in the list is fast. However, the list ends up having to be resized, and this takes up most of the time in the algorithm. So what I need is some kind of tree-like data structure that will allow me to efficiently add ordered data, and also be efficient in memory. Any suggestions?
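
    For the closing question - an ordered collection that grows without resizing - java.util.TreeSet is one fit (a sketch under my own assumptions; the poster's code and language aren't shown): it keeps the merged indices sorted and duplicate-free with O(log n) inserts.

        import java.util.List;
        import java.util.TreeSet;

        public class NeighbourQuery {
            // Hypothetical stand-in for the poster's octree node.
            interface Node { List<Integer> objectIndices(); }

            // Ordered union of the indices of every node containing the query object.
            static TreeSet<Integer> neighboursOf(int queryIndex, List<Node> nodesContainingQuery) {
                TreeSet<Integer> result = new TreeSet<Integer>();
                for (Node node : nodesContainingQuery) {
                    result.addAll(node.objectIndices()); // sorted insert, no array resizing
                }
                result.remove(queryIndex); // exclude the query object itself
                return result;
            }
        }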

    Read the article

  • SQL Server 2005 FREETEXT() Performance Issue

    - by Zenon
    I have a query with about 6-7 joined tables and a FREETEXT() predicate on 6 columns of the base table in the where. Now, this query worked fine (in under 2 seconds) for the last year and practically remained unchanged (I tried old versions and the problem persists). So today, all of a sudden, the same query takes around 1-1.5 minutes. After checking the Execution Plan in SQL Server 2005, rebuilding the FULLTEXT index of that table, reorganising the FULLTEXT index, creating the index from scratch, restarting the SQL Server service, and restarting the whole server, I don't know what else to try. I temporarily switched the query to use LIKE instead until I figure this out (which takes about 6 seconds now). When I look at the query in the query performance analyser and compare the 'FREETEXT' query with the 'LIKE' query, the former has 350 times as many reads (4921261 vs. 13943) and 20 times (38937 vs. 1938) the CPU usage of the latter. So it really is the 'FREETEXT' predicate that causes it to be so slow. Has anyone got any ideas on what the reason might be? Or further tests I could do?

    Read the article

  • Java reflection Method invocations yield results faster than Fields?

    - by omerkudat
    I was microbenchmarking some code (please be nice) and came across this puzzle: when reading a field using reflection, invoking the getter Method is faster than reading the Field. Simple test class: private static final class Foo { public Foo(double val) { this.val = val; } public double getVal() { return val; } public final double val; // only public for demo purposes } We have two reflections: Method m = Foo.class.getDeclaredMethod("getVal", null); Field f = Foo.class.getDeclaredField("val"); Now I call the two reflections in a loop, invoke on the Method, and get on the Field. A first run is done to warm up the VM, a second run is done with 10M iterations. The Method invocation is consistently 30% faster, but why? Note that getDeclaredMethod and getDeclaredField are not called in the loop. They are called once and executed on the same object in the loop. I also tried some minor variations: made the field non-final, transient, non-public, etc. All of these combinations resulted in statistically similar performance. Edit: This is on WinXP, Intel Core2 Duo, Sun JavaSE build 1.6.0_16-b01, running under jUnit4 and Eclipse.
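
    A rough reconstruction of the harness described (the poster's actual loop isn't shown, so the details here are guesses; run main twice to warm up the VM as the poster did):

        import java.lang.reflect.Field;
        import java.lang.reflect.Method;

        public class ReflectBench {
            static final class Foo {
                public final double val;
                public Foo(double val) { this.val = val; }
                public double getVal() { return val; }
            }

            public static void main(String[] args) throws Exception {
                Method m = Foo.class.getDeclaredMethod("getVal");
                Field f = Foo.class.getDeclaredField("val");
                Foo foo = new Foo(42.0);
                double sum = 0;
                long t0 = System.nanoTime();
                for (int i = 0; i < 10000000; i++) sum += (Double) m.invoke(foo);
                long t1 = System.nanoTime();
                for (int i = 0; i < 10000000; i++) sum += f.getDouble(foo);
                long t2 = System.nanoTime();
                System.out.printf("invoke: %d ms, get: %d ms (sum=%.1f)%n",
                        (t1 - t0) / 1000000, (t2 - t1) / 1000000, sum);
            }
        }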

    Read the article

  • What is the best way to add two strings together?

    - by Pim Jager
    I read somewhere (I thought on codinghorror) that it is bad practice to add strings together as if they are numbers, since, like numbers, strings cannot be changed. Thus, adding them together creates a new string. So, I was wondering, what is the best way to add two strings together, when focusing on performance? Which of these four is better, or is there another way which is better? //Note that normally at least one of these two strings is variable $str1 = 'Hello '; $str2 = 'World!'; $output1 = $str1.$str2; //This is said to be bad $str1 = 'Hello '; $output2 = $str1.'World!'; //Also bad $str1 = 'Hello'; $str2 = 'World!'; $output3 = sprintf('%s %s', $str1, $str2); //Good? //This last one is probably more common as: //$output = sprintf('%s %s', 'Hello', 'World!'); $str1 = 'Hello '; $str2 = '{a}World!'; $output4 = str_replace('{a}', $str1, $str2); Does it even matter?

    Read the article

  • How can I avoid garbage collection delays in Java games? (Best Practices)

    - by Brian
    I'm performance-tuning interactive games in Java for the Android platform. Once in a while there is a hiccup in drawing and interaction for garbage collection. Usually it's less than one tenth of a second, but sometimes it can be as large as 200ms on very slow devices. I am using the ddms profiler (part of the Android SDK) to search out where my memory allocations come from and excise them from my inner drawing and logic loops. The worst offender had been short loops done like for(GameObject gob : interactiveObjects) gob.onDraw(canvas); where every single time the loop was executed an iterator was allocated. I'm using arrays (ArrayList) for my objects now. If I ever want trees or hashes in an inner loop I know that I need to be careful or even reimplement them instead of using the Java Collections framework, since I can't afford the extra garbage collection. That may come up when I'm looking at priority queues. I also have trouble where I want to display scores and progress using Canvas.drawText. This is bad: canvas.drawText("Your score is: " + Score.points, x, y, paint); because Strings, char arrays and StringBuffers will be allocated all over to make it work. If you have a few text display items and run the frame 60 times a second, that begins to add up and will increase your garbage collection hiccups. I think the best choice here is to keep char[] arrays and decode your int or double manually into them and concatenate strings onto the beginning and end. I'd like to hear if there's something cleaner. I know there must be others out there dealing with this. How do you handle it and what are the pitfalls and best practices you've discovered to run interactively on Java or Android? These gc issues are enough to make me miss manual memory management, but not very much.
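
    Putting the two fixes from the post together - indexing the ArrayList so no Iterator is created, and reusing a single StringBuilder for the score text - might look like this inside a custom View (interactiveObjects, Score.points, x, y and paint are the post's names; the rest is illustrative):

        private final StringBuilder scoreText = new StringBuilder(32);

        @Override
        protected void onDraw(Canvas canvas) {
            // Index-based loop: no Iterator object allocated per frame.
            for (int i = 0, n = interactiveObjects.size(); i < n; i++) {
                interactiveObjects.get(i).onDraw(canvas);
            }
            // Reuse one buffer instead of "Your score is: " + Score.points.
            scoreText.setLength(0); // reset in place, no new allocation
            scoreText.append("Your score is: ").append(Score.points);
            // This drawText overload takes a CharSequence, so no String is built.
            canvas.drawText(scoreText, 0, scoreText.length(), x, y, paint);
        }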

    Read the article

  • Union of two or more (hash)maps

    - by javierfp
    I have two Maps that contain the same type of Objects: Map<String, TaskJSO> a = new HashMap<String, TaskJSO>(); Map<String, TaskJSO> b = new HashMap<String, TaskJSO>(); public class TaskJSO { String id; } The map keys are the "id" properties. a.put(taskJSO.getId(), taskJSO); I want to obtain a list with: all values in "Map b" + all values in "Map a" that are not in "Map b". What is the fastest way of doing this operation? Thanks EDIT: The comparison is done by id. So, two TaskJSOs are considered equal if they have the same id (the equals method is overridden). My intention is to know which is the fastest way of doing this operation from a performance point of view. For instance, is there any difference if I do the "comparison" in a map (as suggested by Peter): Map<String, TaskJSO> ab = new HashMap<String, TaskJSO>(a); ab.putAll(b); ab.values() or if instead I use a set (as suggested by Nishant): Set s = new HashSet(); s.addAll(a.values()); s.addAll(b.values());
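
    Both suggestions from the thread, sketched side by side (TaskJSO as declared in the question); each is O(|a| + |b|), and in both versions written this way, b's entries win on a duplicate id:

        // Map-based (Peter's suggestion): copy a, overlay b, take the values.
        Map<String, TaskJSO> ab = new HashMap<String, TaskJSO>(a);
        ab.putAll(b); // b overwrites a on duplicate keys
        List<TaskJSO> result = new ArrayList<TaskJSO>(ab.values());

        // Set-based (Nishant's suggestion), with b added first so b wins.
        // Note: this also relies on hashCode being overridden consistently
        // with the overridden equals, which the question doesn't state.
        Set<TaskJSO> s = new HashSet<TaskJSO>(b.values());
        s.addAll(a.values()); // only a-values whose id isn't in b are added
        List<TaskJSO> result2 = new ArrayList<TaskJSO>(s);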

    Read the article

  • Oracle Open World starts on Sunday, Sept 30

    - by Mike Dietrich
    Oracle Open World 2012 starts on Sunday this week - and we are really looking forward to seeing you in one of our presentations, especially the "Database Upgrade on Steroids: Real Speed, Real Customers, Real Secrets" session on Monday, Oct 1, 12:15pm in Moscone South 307 (just skip the lunch - the boxed food is not healthy at all): Monday, Oct 1, 12:15 PM - 1:15 PM - Moscone South - 307 Database Upgrade on Steroids: Real Speed, Real Customers, Real Secrets Mike Dietrich - Consulting Member of Technical Staff, Oracle Georg Winkens - Technical Manager, Amadeus Data Processing Carol Tagliaferri - Senior Development Manager, Oracle Looking to improve the performance of your database upgrade and learn about other ways to reduce upgrade time? Isn't everyone? In this session, you will learn directly from Oracle's Upgrade Development team about what you can do to speed things up. Find out about ways to reduce upgrade downtime such as using a transient logical standby database and/or Oracle GoldenGate, and get other hints and tips. Learn about new features that improve upgrade performance and reduce downtime. Hear Georg Winkens, DB Services technical manager from Amadeus, speak about his upgrade experience, and get real-life performance measurements and advice for a successful upgrade. And don't forget: we already start on Sunday, so if you'd like to learn about the SAP database upgrades at Deutsche Messe: Sunday, Sep 30, 11:15 AM - 12:00 PM - Moscone West - 2001 Oracle Database Upgrade to 11g Release 2 with SAP Applications Andreas Ellerhoff - DBA, Deutsche Messe AG Mike Dietrich - Consulting Member of Technical Staff, Oracle Jan Klokkers - Sr. Director SAP Development, Oracle Deutsche Messe began to use Oracle6 Database at the end of the 1980s and has been using Oracle Database technology together with SAP applications successfully since 2002. At the end of 2010, it took the first steps of an upgrade to Oracle Database 11g Release 2 (11.2), and since mid-2011, all SAP production systems there run successfully with Oracle Database 11g. This presentation explains why Deutsche Messe uses Oracle Database together with SAP applications, discusses the many reasons for the upgrade to Release 11g, and focuses on the operational top aspects from a DBA perspective. And unfortunately the Hands-On Lab is sold out already ... We would like to apologize, but we have absolutely ZERO influence on either the number of runs or the number of available seats. Tuesday, Oct 2, 10:15 AM - 12:45 PM - Marriott Marquis - Salon 12/13 Hands-On Lab: Upgrading an Oracle Database Instance, Using Best Practices Roy Swonger - Senior Director, Software Development, Oracle Carol Tagliaferri - Senior Development Manager, Oracle Mike Dietrich - Consulting Member of Technical Staff, Oracle Cindy Lim - PMTS, Oracle Carol Palmer - Principal Product Manager, Oracle This hands-on lab gives participants the opportunity to work through a database upgrade from an older release of Oracle Database to the very latest Oracle Database release available. Participants will learn how the improved automation of the upgrade process and the generation of fix-up scripts can quickly help fix database issues prior to upgrading. The lab also uses the new parallel upgrade feature to improve performance of the upgrade, resulting in less downtime. Come get inside information about database upgrades from the Database Upgrade development team. See you soon

    Read the article

  • Oracle Expands Sun Blade Portfolio for Cloud and Highly Virtualized Environments

    - by Ferhat Hatay
    Oracle announced the expansion of the Sun Blade portfolio for cloud and highly virtualized environments that deliver powerful performance and simplified management as tightly integrated systems. Along with the SPARC T3-1B blade server, the Oracle VM blade cluster reference configuration and Oracle's optimized solution for Oracle WebLogic Suite, Oracle introduced the dual-node Sun Blade X6275 M2 server module with some impressive benchmark results. Benchmarks on the Sun Blade X6275 M2 server module demonstrate the outstanding performance characteristics critical for running varied commercial applications used in cloud and highly virtualized environments. These include best-in-class SPEC CPU2006 results with the Intel Xeon processor 5600 series, six FLUENT world records and 1.8 times the price-performance of the IBM Power 755 running NAMD, a prominent bio-informatics workload. Benchmarks for the Sun Blade X6275 M2 server module: SPEC CPU2006 The Sun Blade X6275 M2 server module demonstrated best-in-class SPECint_rate2006 results for all published results using the Intel Xeon processor 5600 series, with a result of 679. This result is 97% better than the HP BL460c G7 blade, 80% better than the IBM HS22V blade, and 79% better than the Dell M710 blade. This result demonstrates the density advantage of Oracle's new server module for space-constrained data centers. Sun Blade X6275 M2 (2 Nodes, Intel Xeon X5670 2.93GHz) - 679 SPECint_rate2006; HP ProLiant BL460c G7 (2.93 GHz, Intel Xeon X5670) - 347 SPECint_rate2006; IBM BladeCenter HS22V (Intel Xeon X5680) - 377 SPECint_rate2006; Dell PowerEdge M710 (Intel Xeon X5680, 3.33 GHz) - 380 SPECint_rate2006. SPEC, SPECint, SPECfp reg tm of Standard Performance Evaluation Corporation. Results from www.spec.org as of 11/24/2010 and this report. For more specifics about these results, please see http://blogs.sun.com/BestPerf FLUENT The Sun Blade X6275 M2 server module produced world-record results on each of the six standard cases in the current "FLUENT 12" benchmark test suite at 8-, 12-, 24-, 32-, 64- and 96-core configurations. These results beat the most recent QLogic score with IBM DX 360 M series platforms and QLogic "Truescale" interconnects. Results on the sedan_4m test case on the Sun Blade X6275 M2 server module are 23% better than the HP C7000 system, and 20% better than the IBM DX 360 M2; Dell has not posted a result for this test case. Results can be found at the FLUENT website. ANSYS's FLUENT software solves fluid flow problems, and is based on a numerical technique called computational fluid dynamics (CFD), which is used in the automotive, aerospace, and consumer products industries. The FLUENT 12 benchmark test suite consists of seven models that are well suited for multi-node clustered environments and representative of modern engineering CFD clusters. Vendors benchmark their systems with the principal objective of providing comparative performance information for FLUENT software that, among other things, depends on compilers, optimization, interconnect, and the performance characteristics of the hardware. FLUENT application performance is representative of other commercial applications that require memory and CPU resources to be available in a scalable cluster-ready format. The FLUENT benchmark has six conventional test cases (eddy_417k, turbo_500k, aircraft_2m, sedan_4m, truck_14m, truck_poly_14m) at various core counts.
All information on the FLUENT website (http://www.fluent.com) is Copyright 1995-2010 by ANSYS Inc. Results as of November 24, 2010. For more specifics about these results, please see http://blogs.sun.com/BestPerf NAMD Results on the Sun Blade X6275 M2 server module running NAMD (a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems) show up to 1.8X better price/performance than IBM's Power 7-based system. For space-constrained environments, the ultra-dense Sun Blade X6275 M2 server module provides 1.7X better price/performance per rack unit than IBM's system. IBM Power 755 4-way Cluster (16U). Total price for cluster: $324,212. See IBM United States Hardware Announcement 110-008, dated February 9, 2010, pp. 4, 21 and 39-46. Sun Blade X6275 M2 8-Blade Cluster (10U). Total price for cluster: $193,939. Price/performance and performance/RU comparisons based on f1ATPase molecule test results. Sun Blade X6275 M2 cluster: $3,568/step/sec, 5.435 step/sec/RU. IBM Power 755 cluster: $6,355/step/sec, 3.189 step/sec/RU. See http://www-03.ibm.com/systems/power/hardware/reports/system_perf.html. See http://www.ks.uiuc.edu/Research/namd/performance.html for more information, results as of 11/24/10. For more specifics about these results, please see http://blogs.sun.com/BestPerf Reverse Time Migration Reverse Time Migration is heavily used in geophysical imaging and modeling for Oil & Gas exploration. The Sun Blade X6275 M2 server module showed up to a 40% performance improvement over the previous-generation server module, with super-linear scalability to 16 nodes for the 9-point stencil used in this Reverse Time Migration computational kernel. The balanced combination of Oracle's Sun Storage 7410 system with the Sun Blade X6275 M2 server module cluster showed linear scalability for the total application throughput, including the I/O and MPI communication, to produce a final 3-D seismic depth imaged cube for interpretation. The final image write time from the Sun Blade X6275 M2 server module nodes to Oracle's Sun Storage 7410 system achieved 10GbE line speed of 1.25 GBytes/second or better performance. Between subsequent runs, the effects of I/O buffer caching on the Sun Blade X6275 M2 server module nodes and write-optimized caching on the Sun Storage 7410 system gave up to 1.8 GBytes/second effective write performance. The performance results and characterization of this Reverse Time Migration benchmark could serve as a useful measure for many other I/O-intensive commercial applications. 3D VTI Reverse Time Migration Seismic Depth Imaging, see http://blogs.sun.com/BestPerf/entry/3d_vti_reverse_time_migration for more information, results as of 11/14/2010.

    Read the article

  • Oracle on Windows / .NET (December 2010)

    - by Yusuke.Yamamoto
    [Japanese-language roundup; the original text was lost to character-encoding corruption, with most characters rendered as '?'. Recoverable topics: running Oracle Database on Windows Server and with .NET; Oracle Database 11g Release 2 on Windows Server 2008; Oracle Data Provider for .NET and the Oracle Data Access Components (ODAC); Windows Server integration (Active Directory, MSCS, VSS); migrating from SQL Server to Oracle Database; OTN resources for Windows and .NET developers; and Oracle Direct Seminar sessions.]

    Read the article

  • New channels for Exadata 11.2.3.1.1

    - by Rene Kundersma
    With the release of Exadata 11.2.3.1.0 back in April 2012, Oracle deprecated the minimal pack for the Exadata Database Servers (compute nodes). From that release on, the Linux Database Server updates are done using ULN and YUM. For the 11.2.3.1.0 release the ULN exadata_dbserver_11.2.3.1.0_x86_64_base channel was made available, and Exadata operators could subscribe their system to it via linux.oracle.com. With the new 11.2.3.1.1 release two additional channels are added: a 'latest' channel (exadata_dbserver_11.2_x86_64_latest) and a 'patch' channel (exadata_dbserver_11.2_x86_64_patch). The patch channel has the new or updated packages in 11.2.3.1.1 relative to the base channel. The latest channel has all the packages from the 11.2.3.1.0 base and patch channels combined. From here there are three possible situations a Database Server can be in before it can be updated to 11.2.3.1.1: the Database Server is on an Exadata release < 11.2.3.1.0; the Database Server is patched to 11.2.3.1.0; the Database Server is freshly imaged to 11.2.3.1.0. In order to bring a Database Server to 11.2.3.1.1, the same YUM-based approach can be used for all three cases, with some minor differences. For Database Servers on a release < 11.2.3.1.0 the following high-level steps need to be performed: Subscribe to el5_x86_64_addons, ol5_x86_64_latest and exadata_dbserver_11.2_x86_64_latest; Create a local repository; Point the Database Server to the local repository*; Install the update. (* During this process a one-time action needs to be done; details are in the README.) For Database Servers patched to 11.2.3.1.0: Subscribe to the patch channel exadata_dbserver_11.2_x86_64_patch; Create a local repository; Point the Database Server to the local repository; Update the system. For Database Servers freshly imaged to 11.2.3.1.0: Subscribe to the patch channel exadata_dbserver_11.2_x86_64_patch; Create a local repository; Point the Database Server to the local repository; Update the system. The difference between 'situation 2' (Database Server is patched to 11.2.3.1.0) and 'situation 3' (Database Server is freshly imaged to 11.2.3.1.0) is that in situation 2 the existing Exadata-computenode.repo file needs to be edited, while in situation 3 this file does not exist and needs to be created or copied. Another difference is that you will end up with more OFA packages installed in situation 2, because none are removed during the updating process. The YUM update functionality with the new channels is a great enhancement to the Database Server update procedure. As usual, the updates can be done in a rolling fashion so no database service downtime is required. For detailed and up-to-date instructions always see the patch README (patch 13998727) and My Oracle Support notes 1466459.1 and 888828.1. Rene Kundersma

    Read the article

  • Assertion failure when trying to write (INSERT, UPDATE) to an SQLite database on iPhone.

    - by Mark McFarlane
    I have a really frustrating error that I've spent hours looking at and cannot fix. I can get data from my db no problem with this code, but inserting or updating gives me these errors: *** Assertion failure in +[Functions db_insert_answer:question_type:score:], /Volumes/Xcode/Kanji/Classes/Functions.m:129 *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Error inserting: db_insert_answer:question_type:score:' Here is the code I'm using: [Functions db_insert_answer:[[dict_question objectForKey:@"JISDec"] intValue] question_type:@"kanji_meaning" score:arc4random() % 100]; //update EF, Next_question, n here [Functions db_update_EF:[dict_question objectForKey:@"question"] EF:EF]; To call these functions: +(sqlite3_stmt *)db_query:(NSString *)queryText{ sqlite3 *database = [self get_db]; sqlite3_stmt *statement; NSLog(queryText); if (sqlite3_prepare_v2(database, [queryText UTF8String], -1, &statement, nil) == SQLITE_OK) { } else { NSLog(@"HMM, COULDNT RUN QUERY: %s\n", sqlite3_errmsg(database)); } sqlite3_close(database); return statement; } +(void)db_insert_answer:(int)obj_id question_type:(NSString *)question_type score:(int)score{ sqlite3 *database = [self get_db]; sqlite3_stmt *statement; char *errorMsg; char *update = "INSERT INTO Answers (obj_id, question_type, score, date) VALUES (?, ?, ?, DATE())"; if (sqlite3_prepare_v2(database, update, -1, &statement, nil) == SQLITE_OK) { sqlite3_bind_int(statement, 1, obj_id); sqlite3_bind_text(statement, 2, [question_type UTF8String], -1, NULL); sqlite3_bind_int(statement, 3, score); } if (sqlite3_step(statement) != SQLITE_DONE){ NSAssert1(0, @"Error inserting: %s", errorMsg); } sqlite3_finalize(statement); sqlite3_close(database); NSLog(@"Answer saved"); } +(void)db_update_EF:(NSString *)kanji EF:(int)EF{ sqlite3 *database = [self get_db]; sqlite3_stmt *statement; //NSLog(queryText); char *errorMsg; char *update = "UPDATE Kanji SET EF = ? WHERE Kanji = '?'"; if (sqlite3_prepare_v2(database, update, -1, &statement, nil) == SQLITE_OK) { sqlite3_bind_int(statement, 1, EF); sqlite3_bind_text(statement, 2, [kanji UTF8String], -1, NULL); } else { NSLog(@"HMM, COULDNT RUN QUERY: %s\n", sqlite3_errmsg(database)); } if (sqlite3_step(statement) != SQLITE_DONE){ NSAssert1(0, @"Error updating: %s", errorMsg); } sqlite3_finalize(statement); sqlite3_close(database); NSLog(@"Update saved"); } +(sqlite3 *)get_db{ sqlite3 *database; NSFileManager *fileManager = [NSFileManager defaultManager]; NSString *copyFrom = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"/kanji_training.sqlite"]; if([fileManager fileExistsAtPath:[self dataFilePath]]) { //NSLog(@"DB FILE ALREADY EXISTS"); } else { [fileManager copyItemAtPath:copyFrom toPath:[self dataFilePath] error:nil]; NSLog(@"COPIED DB TO DOCUMENTS BECAUSE IT DIDNT EXIST: NEW INSTALL"); } if (sqlite3_open([[self dataFilePath] UTF8String], &database) != SQLITE_OK) { sqlite3_close(database); NSAssert(0, @"Failed to open database"); NSLog(@"FAILED TO OPEN DB"); } else { if([fileManager fileExistsAtPath:[self dataFilePath]]) { //NSLog(@"DB PATH:"); //NSLog([self dataFilePath]); } } return database; } + (NSString *)dataFilePath { NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0]; return [documentsDirectory stringByAppendingPathComponent:@"kanji_training.sqlite"]; } I really can't work it out! Can anyone help me? Many thanks.

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create" aka an insert of a new item. <Elements> <Element ID="1234"> <Attr ID="221">Value</Attr> <Attr ID="225">287</Attr> <Attr ID="234"> <Element ID="99825"> <Attr ID="7">Value1</Attr> <Attr ID="8">Value2</Attr> <Attr ID="9" Action="delete" /> </Element> <Element ID="99826" Action="delete" /> <Element ID="0" Type="24"> <Attr ID="7">Value4</Attr> <Attr ID="8">Value5</Attr> <Attr ID="9">Value6</Attr> </Element> <Element ID="0" Type="24"> <Attr ID="7">Value7</Attr> <Attr ID="8">Value8</Attr> <Attr ID="9">Value9</Attr> </Element> </Attr> <Rel ID="3827" Action="delete" /> <Rel ID="2284" Role="parent"> <Element ID="3827" /> <Element ID="3829" /> <Attr ID="665">1</Attr> </Rel> <Rel ID="0" Type="23" Role="child"> <Element ID="3830" /> <Attr ID="67" /> </Rel> </Element> <Element ID="0" Type="87"> <Attr ID="221">Value</Attr> <Attr ID="225">569</Attr> <Attr ID="234"> <Element ID="0" Type="24"> <Attr ID="7">Value10</Attr> <Attr ID="8">Value11</Attr> <Attr ID="9">Value12</Attr> </Element> </Attr> </Element> <Element ID="1235" Action="delete" /> </Elements> Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as: ElementID 2284 ElementID1 1234 ElementID2 3827 Having multiple children in one relationship isn't currently supported, but it would be nice later.
Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k +). UPDATE I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements: Existing data elements Example: input name e12345_a678 (element 12345, attribute 678), the input value is the value of the attribute. New elements Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items. var newid = 0; function metadataAdd(reference, nameid, value) { var t = document.createElement('input'); t.setAttribute('name', nameid); t.setAttribute('id', nameid); t.setAttribute('type', 'hidden'); t.setAttribute('value', value); reference.appendChild(t); } function multiAdd(target, parentelementid, attrid, elementtypeid) { var proto = document.getElementById('a' + attrid + '_proto'); var instance = document.createElement('p'); target.parentNode.parentNode.insertBefore(instance, target.parentNode); var thisid = ++newid; instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_'); instance.id = 'n' + thisid; instance.className += ' new'; metadataAdd(instance, 'n' + thisid + '_p', parentelementid); metadataAdd(instance, 'n' + thisid + '_c', attrid); metadataAdd(instance, 'n' + thisid + '_t', elementtypeid); return false; } Example: Template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). all attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created: n1_t, value is the elementtype of the element to be created n1_p, value is the parent id of the element (if it is a relationship) n1_c, value is the child id of the element (if it is a relationship) Deleting elements A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly. 
When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code): Set Data = Server.CreateObject("ADODB.Recordset") Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull Data.CursorLocation = adUseClient Data.CursorType = adOpenDynamic Data.Open This is the recordset for values, the other is for the elements themselves. I step through the posted form and for the element recordset use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative to distinguish them from regular elements (rather than requiring a separate column to indicate if it is new or addresses an existing element). I use regular expression to tear apart the form keys: "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$" Then, adding an attribute looks like this. Data.AddNew ElementID.Value = DataID AttrID.Value = Integerize(Matches(0).SubMatches(3)) AttrValue.Value = Request.Form(Key) Data.Update ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so: Set Cmd = Server.CreateObject("ADODB.Command") With Cmd Set .ActiveConnection = MyDBConn .CommandType = adCmdStoredProc .CommandText = "DataPost" .Prepared = False .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element)) .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data)) End With Result.Open Cmd ' previously created recordset object with options set Here's the function that does the xml conversion: Private Function XMLFromRecordset(Recordset) Dim Stream Set Stream = Server.CreateObject("ADODB.Stream") Stream.Open Recordset.Save Stream, adPersistXML Stream.Position = 0 XMLFromRecordset = Stream.ReadText End Function Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346 for example). Here are some key snippets from the stored procedure. 
Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon: CREATE PROCEDURE [dbo].[DataPost] @ElementMetaData ntext, @ElementData ntext AS DECLARE @hdoc int --- snip --- EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData, '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />' INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2) SELECT * FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0) WITH ( ElementID int, ElementTypeID int, ElementID1 int, ElementID2 int ) ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation EXEC sp_xml_removedocument @hdoc --- snip --- UPDATE E SET E.ElementTypeID = M.ElementTypeID FROM Element E INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1 The following query does the correlation of the negative new element ids to the newly inserted ones: UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows SET NewElementID = Scope_Identity() - @@RowCount + DataID WHERE ElementID < 0 Other set-based queries do all the other work of validating that the attributes are allowed, are the correct data type, and inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.

    Read the article

  • Sensible Way to Pass Web Data to Sql Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create" aka an insert of a new item. <Elements> <Element ID="1234"> <Attr ID="221">Value</Attr> <Attr ID="225">287</Attr> <Attr ID="234"> <Element ID="99825"> <Attr ID="7">Value1</Attr> <Attr ID="8">Value2</Attr> <Attr ID="9" Action="delete" /> </Element> <Element ID="99826" Action="delete" /> <Element ID="0" Type="24"> <Attr ID="7">Value4</Attr> <Attr ID="8">Value5</Attr> <Attr ID="9">Value6</Attr> </Element> <Element ID="0" Type="24"> <Attr ID="7">Value7</Attr> <Attr ID="8">Value8</Attr> <Attr ID="9">Value9</Attr> </Element> </Attr> <Rel ID="3827" Action="delete" /> <Rel ID="2284" Role="parent"> <Element ID="3827" /> <Element ID="3829" /> <Attr ID="665">1</Attr> </Rel> <Rel ID="0" Type="23" Role="child"> <Element ID="3830" /> <Attr ID="67" </Rel> </Element> <Element ID="0" Type="87"> <Attr ID="221">Value</Attr> <Attr ID="225">569</Attr> <Attr ID="234"> <Element ID="0" Type="24"> <Attr ID="7">Value10</Attr> <Attr ID="8">Value11</Attr> <Attr ID="9">Value12</Attr> </Element> </Attr> </Element> <Element ID="1235" Action="delete" /> </Elements> Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="Delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as: ElementID 2284 ElementID1 1234 ElementID2 3827 Having multiple children in one relationship isn't currently supported, but it would be nice later. 
Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k +). UPDATE I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements: Existing data elements Example: input name e12345_a678 (element 12345, attribute 678), the input value is the value of the attribute. New elements Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items. var newid = 0; function metadataAdd(reference, nameid, value) { var t = document.createElement('input'); t.setAttribute('name', nameid); t.setAttribute('id', nameid); t.setAttribute('type', 'hidden'); t.setAttribute('value', value); reference.appendChild(t); } function multiAdd(target, parentelementid, attrid, elementtypeid) { var proto = document.getElementById('a' + attrid + '_proto'); var instance = document.createElement('p'); target.parentNode.parentNode.insertBefore(instance, target.parentNode); var thisid = ++newid; instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_'); instance.id = 'n' + thisid; instance.className += ' new'; metadataAdd(instance, 'n' + thisid + '_p', parentelementid); metadataAdd(instance, 'n' + thisid + '_c', attrid); metadataAdd(instance, 'n' + thisid + '_t', elementtypeid); return false; } Example: Template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). all attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created: n1_t, value is the elementtype of the element to be created n1_p, value is the parent id of the element (if it is a relationship) n1_c, value is the child id of the element (if it is a relationship) Deleting elements A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly. 
When the form is posted, here's a sample of building one of the two recordsets used (Classic ASP code):

    Set Data = Server.CreateObject("ADODB.Recordset")
    Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn
    Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn
    Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull
    Data.CursorLocation = adUseClient
    Data.CursorType = adOpenDynamic
    Data.Open

This is the recordset for values; the other is for the elements themselves. I step through the posted form, and for the element recordset I use a Scripting.Dictionary populated with instances of a custom class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative IDs to distinguish them from regular elements (rather than requiring a separate column to indicate whether a row is new or addresses an existing element). I use a regular expression to tear apart the form keys:

    "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$"

Then, adding an attribute looks like this:

    Data.AddNew
    ElementID.Value = DataID
    AttrID.Value = Integerize(Matches(0).SubMatches(3))
    AttrValue.Value = Request.Form(Key)
    Data.Update

ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so:

    Set Cmd = Server.CreateObject("ADODB.Command")
    With Cmd
        Set .ActiveConnection = MyDBConn
        .CommandType = adCmdStoredProc
        .CommandText = "DataPost"
        .Prepared = False
        .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element))
        .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data))
    End With
    Result.Open Cmd ' previously created recordset object with options set

Here's the function that does the XML conversion:

    Private Function XMLFromRecordset(Recordset)
        Dim Stream
        Set Stream = Server.CreateObject("ADODB.Stream")
        Stream.Open
        Recordset.Save Stream, adPersistXML
        Stream.Position = 0
        XMLFromRecordset = Stream.ReadText
    End Function

Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346, for example). Here are some key snippets from the stored procedure.
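(For anyone who hasn't seen adPersistXML output: the persisted stream looks roughly like the fragment below, with the schema section elided and the row values invented here. Rows added with AddNew on a client-side recordset are persisted inside an rs:insert wrapper, which is why the OPENXML row pattern in the procedure below is '/xml/rs:data/rs:insert/z:row'.)

    <xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882"
         xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882"
         xmlns:rs="urn:schemas-microsoft-com:rowset"
         xmlns:z="#RowsetSchema">
      <s:Schema id="RowsetSchema">
        <!-- field definitions elided -->
      </s:Schema>
      <rs:data>
        <rs:insert>
          <z:row ElementID="-1" AttrID="221" Value="Some value"/>
          <z:row ElementID="-1" AttrID="225" Value="287"/>
        </rs:insert>
      </rs:data>
    </xml>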
Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon:

    CREATE PROCEDURE [dbo].[DataPost]
        @ElementMetaData ntext,
        @ElementData ntext
    AS
    DECLARE @hdoc int
    --- snip ---
    EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData,
        '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />'

    INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2)
    SELECT *
    FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0)
    WITH (
        ElementID int,
        ElementTypeID int,
        ElementID1 int,
        ElementID2 int
    )
    ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation

    EXEC sp_xml_removedocument @hdoc

    --- snip ---

    UPDATE E
    SET E.ElementTypeID = M.ElementTypeID
    FROM Element E
    INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID
    WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1

The following query does the correlation of the negative new element IDs to the newly inserted ones:

    UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows
    SET NewElementID = Scope_Identity() - @@RowCount + DataID
    WHERE ElementID < 0

Other set-based queries do all the other work: validating that the attributes are allowed and of the correct data type, and inserting, updating, and deleting elements and attributes.

I hope this brief run-down is useful to others some day! Converting ADO recordsets to an XML stream was a huge winner for me, as it saved all sorts of time and came with a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with two inputs was also much easier than sticking to some ideal of having everything in a single XML stream.
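One footnote on the correlation query above, since the arithmetic is easy to misread. A worked example, assuming DataID is a 1-based ordinal assigned to the new (negative-ID) rows in the same order the INSERT generated identity values:

    -- Suppose 3 new elements were inserted, leaving Scope_Identity() = 1003
    -- and @@RowCount = 3. Then, row by row:
    --   DataID 1 -> 1003 - 3 + 1 = 1001
    --   DataID 2 -> 1003 - 3 + 2 = 1002
    --   DataID 3 -> 1003 - 3 + 3 = 1003
    -- This relies on the identity values being handed out contiguously to one
    -- set-based INSERT, which is why it is done in a single statement.

    -- The result set handed back to the page could then be as simple as this
    -- (the exact query is a guess; names follow the snippets above):
    SELECT ElementID AS PageID, NewElementID AS CreatedID
    FROM #ElementMetadata
    WHERE ElementID < 0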

    Read the article

  • SQL Authority News – Play by Play with Pinal Dave – A Birthday Gift

    - by Pinal Dave
Today is my birthday.

Personal Note

When I was young, I always looked forward to my birthday, as on this day I used to get gifts from everybody. Now that I am getting older, on each of my birthdays I have almost the same feeling, but the direction is different. Now on each of my birthdays, I feel like giving gifts to everybody. I have received lots of support, love, and respect from everybody, and now I must return it. Well, on this birthday, I have a very unique gift for everybody – my latest course on SQL Server.

How I Tune Performance

I often get questions asking how I work on a normal day. I am often asked how I work when a performance tuning project is assigned to me. Lots of people have expressed their desire for me to explain and demonstrate my own method of solving a performance problem when I am facing a real-world problem. It is a pretty difficult task, as in the real world nothing goes as planned, and pre-planned demonstrations have no place there. The real world demands real solutions, and in a timely fashion. If a consultant goes to industry and does not demonstrate his or her capabilities in the very first few minutes, it does not matter how famous he or she is – the door is shown eventually. It is true, and in my early career I faced it quite commonly. I have learned the trick: be honest from the start and request absolutely transparent communication from the organization I am consulting for.

Play by Play

Play by Play is a very unique setup. It is not planned, and it is a step-by-step course. It is like a reality show – a very real encounter with the problem and a real problem-solving approach. I had a great time doing this course. Geoffrey Grosenbach (VP of Pluralsight) sits down with me to see what a SQL Server admin does in the real world. This Play-by-Play focuses on SQL Server performance tuning, and I go over optimizing queries and fine-tuning the server. The table of contents of this course is very simple.

Introduction

In the introduction I explain my basic strategies when I am approached by a customer for performance tuning.

Basic Information Gathering

In this module I explain how I gather the various pieces of information needed for a performance tuning project. It is crucial for a consultant to demonstrate to the customer the capability of solving the problem; to resolve even a small problem in a way that gives a big positive impact on performance, the consultant has to gather proper information from the start. I demonstrate in this module how one can collect all the important performance tuning metrics.

Removing Performance Bottlenecks

In this module, I build upon the statistics collected in the previous module. I analyze various performance tuning measures and immediately start implementing various tweaks, which begin improving the performance of the server. This is a very effective method, and it gives an immediate return on effort.

Index Optimization

Indexes are considered a silver bullet for performance tuning. However, this is not always true – there are plenty of examples where indexes perform even worse after being implemented. The key is to understand a few of the basic properties of indexes and implement the right things at the right time. In this module, I describe in detail how to do index optimization and what is right and wrong with indexes. If you are a DBA or developer, and your application is running slow – this is a must-attend module for you. I have some really interesting stories to tell as well.
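Not from the course itself, but to give a flavor of this kind of work: on SQL Server 2005 and later, an index-tuning pass often starts with a quick look at the missing-index DMVs, something like the query below (treat the output as suggestions to evaluate, never as commands to obey):

    SELECT TOP 10
        d.statement AS table_name,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_user_impact
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g
        ON d.index_handle = g.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS s
        ON g.index_group_handle = s.group_handle
    ORDER BY s.user_seeks * s.avg_user_impact DESC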
Optimize Query with Rewrite

Every problem has more than one solution. In this module we will see another very famous, but hard to master, skill for performance tuning – query rewriting. There are a few do's and don'ts for any query rewrite. I take a very simple example and demonstrate how a query rewrite can improve the performance of the query many times over. I also share some real-world funny stories in this module.

This course is hosted at Pluralsight. You will need a valid Pluralsight login to watch the Play by Play: Pinal Dave course. You can also sign up for a FREE trial of Pluralsight to watch this course.

As today is my birthday – I will give 10 people (chosen randomly) who express their desire to learn from this course a free code. Please leave a comment and I will send you a free code to watch this course for free.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Video

    Read the article

  • Oracle Database 11g Release 2 is SAP Certified for Unix and Linux Platforms

    - by ?? ?
From the US blog: Oracle Database 11g Release 2 is SAP certified for Unix and Linux platforms.

SAP has now certified Oracle Database 11g Release 2. The certified UNIX and Linux platforms are:

Linux x86 and x86-64
AIX
HP-UX IA64
Solaris SPARC and x64

In addition, the following options and features are certified as well:

Advanced Compression Option (table, RMAN backup, expdp, DG Network)
Real Application Testing
Oracle Database 11g Release 2 Database Vault
Oracle Database 11g Release 2 RAC
Advanced Encryption for tablespaces, RMAN backups, expdp, DG Network
Direct NFS
Deferred Segments
Online Patching

For details, please refer to SAP Note 1398634.
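As a hypothetical illustration of what the Advanced Compression Option certification makes available (the schema and table names below are invented; SAPSR3 is merely a typical SAP schema owner), OLTP table compression can be enabled in 11g Release 2 like this:

    -- Newly written blocks are compressed from this point on:
    ALTER TABLE sapsr3.big_document_table COMPRESS FOR OLTP;

    -- Existing rows are only compressed when the segment is rebuilt:
    ALTER TABLE sapsr3.big_document_table MOVE COMPRESS FOR OLTP;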

    Read the article

  • C# .Net 3.5 Asynchronous Socket Server Performance Problem

    - by iBrAaAa
I'm developing an asynchronous game server using the .Net socket asynchronous model (BeginAccept/EndAccept, etc.). The problem I'm facing is this: when I have only one client connected, the server response time is very fast, but once a second client connects, the server response time increases too much. I've measured the time from when a client sends a message to the server until it gets the reply, in both cases. I found that the average time is about 17 ms with one client and about 280 ms with two clients!!!

What I really see is that when 2 clients are connected and only one of them is moving (i.e. requesting service from the server), it is exactly like the case when only one client is connected (i.e. fast response). However, when the 2 clients move at the same time (i.e. request service from the server at the same time), their motion becomes very slow (as if the server replies to each of them in order, i.e. not simultaneously).

Basically, what I am doing is this: when a client requests permission for motion from the server and the server grants the request, the server then broadcasts the new position of the client to all the players. So if two clients are moving at the same time, the server ends up trying to broadcast the new position of each of them to both clients at the same time.

EX:
Client1 asks to go to position (2,2)
Client2 asks to go to position (5,5)
Server sends to each of Client1 & Client2 the same two messages:
message1: "Client1 at (2,2)"
message2: "Client2 at (5,5)"

I believe the problem may come from the fact that the Socket class is thread safe, according to the MSDN documentation http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.aspx (NOT SURE THAT IT IS THE PROBLEM).

Below is the code for the server:

    using System;
    using System.Collections;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    /// <summary>
    /// This class is responsible for handling packet receiving and sending.
    /// </summary>
    public class NetworkManager
    {
        /// <summary>
        /// The server port number to be used for the connections. Its default value is 5000.
        /// </summary>
        private readonly int port = 5000;

        /// <summary>
        /// Hashtable containing all the clients connected to the server.
        /// Key: player ID. Value: client info (socket, buffer, etc.).
        /// </summary>
        private readonly Hashtable connectedClients = new Hashtable();

        /// <summary>
        /// An event used to hold the server thread alive while waiting for clients.
        /// </summary>
        private readonly ManualResetEvent resetEvent = new ManualResetEvent(false);

        /// <summary>
        /// Keeps track of the number of connected clients.
        /// </summary>
        private int clientCount;

        /// <summary>
        /// The socket of the server at which the clients connect.
        /// </summary>
        private readonly Socket mainSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

        /// <summary>
        /// The socket error code that indicates a client has disconnected.
        /// </summary>
        private const int ClientDisconnectedErrorCode = 10054;

        /// <summary>
        /// The only instance of this class.
        /// </summary>
        private static readonly NetworkManager networkManagerInstance = new NetworkManager();

        /// <summary>
        /// A delegate for the new client connected event.
        /// </summary>
        public delegate void NewClientConnected(Object sender, System.EventArgs e);

        /// <summary>
        /// A delegate for position update message reception.
        /// </summary>
        public delegate void PositionUpdateMessageRecieved(Object sender, PositionUpdateEventArgs e);

        /// <summary>
        /// The event which fires when a client sends a position message.
        /// </summary>
        public PositionUpdateMessageRecieved PositionUpdateMessageEvent { get; set; }

        /// <summary>
        /// Keeps track of the number of connected clients.
        /// </summary>
        public int ClientCount
        {
            get { return clientCount; }
        }

        /// <summary>
        /// A getter for this class instance.
        /// </summary>
        public static NetworkManager NetworkManagerInstance
        {
            get { return networkManagerInstance; }
        }

        private NetworkManager() { }

        /// <summary>
        /// Starts the game server and holds this thread alive.
        /// </summary>
        public void StartServer()
        {
            // Bind the mainSocket to the server IP address and port.
            mainSocket.Bind(new IPEndPoint(IPAddress.Any, port));

            // The server starts to listen on the bound socket with a max connection queue of 1024.
            mainSocket.Listen(1024);

            // Start accepting clients asynchronously.
            mainSocket.BeginAccept(OnClientConnected, null);

            // Wait until there is a client that wants to connect.
            resetEvent.WaitOne();
        }

        /// <summary>
        /// Receives connections of new clients and fires the NewClientConnected event.
        /// </summary>
        private void OnClientConnected(IAsyncResult asyncResult)
        {
            Interlocked.Increment(ref clientCount);
            ClientInfo newClient = new ClientInfo
            {
                WorkerSocket = mainSocket.EndAccept(asyncResult),
                PlayerId = clientCount
            };

            // Add the new client to the hashtable.
            connectedClients.Add(newClient.PlayerId, newClient);

            // Fire the new client event, informing subscribers that a new client has connected.
            if (NewClientEvent != null)
            {
                NewClientEvent(this, System.EventArgs.Empty);
            }

            newClient.WorkerSocket.BeginReceive(newClient.Buffer, 0, BasePacket.GetMaxPacketSize(),
                SocketFlags.None, new AsyncCallback(WaitForData), newClient);

            // Start accepting clients asynchronously again.
            mainSocket.BeginAccept(OnClientConnected, null);
        }

        /// <summary>
        /// Waits for upcoming messages from the clients and fires the proper event according to the packet type.
        /// </summary>
        private void WaitForData(IAsyncResult asyncResult)
        {
            ClientInfo sendingClient = null;
            try
            {
                // Take the client information from the asynchronous result of BeginReceive.
                sendingClient = asyncResult.AsyncState as ClientInfo;

                // If the client is disconnected, throw a socket exception
                // with the correct error code.
                if (!IsConnected(sendingClient.WorkerSocket))
                {
                    throw new SocketException(ClientDisconnectedErrorCode);
                }

                // End the pending receive request.
                sendingClient.WorkerSocket.EndReceive(asyncResult);

                // Fire the appropriate event.
                FireMessageTypeEvent(sendingClient.ConvertBytesToPacket() as BasePacket);

                // Begin receiving data from this client again.
                sendingClient.WorkerSocket.BeginReceive(sendingClient.Buffer, 0, BasePacket.GetMaxPacketSize(),
                    SocketFlags.None, new AsyncCallback(WaitForData), sendingClient);
            }
            catch (SocketException e)
            {
                if (e.ErrorCode == ClientDisconnectedErrorCode)
                {
                    // Close the socket.
                    if (sendingClient.WorkerSocket != null)
                    {
                        sendingClient.WorkerSocket.Close();
                        sendingClient.WorkerSocket = null;
                    }

                    // Remove it from the hash table.
                    connectedClients.Remove(sendingClient.PlayerId);

                    if (ClientDisconnectedEvent != null)
                    {
                        ClientDisconnectedEvent(this, new ClientDisconnectedEventArgs(sendingClient.PlayerId));
                    }
                }
            }
            catch (Exception)
            {
                // Begin receiving data from this client again.
                sendingClient.WorkerSocket.BeginReceive(sendingClient.Buffer, 0, BasePacket.GetMaxPacketSize(),
                    SocketFlags.None, new AsyncCallback(WaitForData), sendingClient);
            }
        }

        /// <summary>
        /// Broadcasts the input message to all the connected clients.
        /// </summary>
        public void BroadcastMessage(BasePacket message)
        {
            byte[] bytes = message.ConvertToBytes();
            foreach (ClientInfo client in connectedClients.Values)
            {
                client.WorkerSocket.BeginSend(bytes, 0, bytes.Length, SocketFlags.None, SendAsync, client);
            }
        }

        /// <summary>
        /// Sends the input message to the client specified by his ID.
        /// </summary>
        /// <param name="message">The message to be sent.</param>
        /// <param name="id">The ID of the client to receive the message.</param>
        public void SendToClient(BasePacket message, int id)
        {
            byte[] bytes = message.ConvertToBytes();
            (connectedClients[id] as ClientInfo).WorkerSocket.BeginSend(bytes, 0, bytes.Length,
                SocketFlags.None, SendAsync, connectedClients[id]);
        }

        private void SendAsync(IAsyncResult asyncResult)
        {
            ClientInfo currentClient = (ClientInfo)asyncResult.AsyncState;
            currentClient.WorkerSocket.EndSend(asyncResult);
        }

        /// <summary>
        /// Fires the event matching the type of the received packet.
        /// </summary>
        /// <param name="packet">The received packet.</param>
        void FireMessageTypeEvent(BasePacket packet)
        {
            switch (packet.MessageType)
            {
                case MessageType.PositionUpdateMessage:
                    if (PositionUpdateMessageEvent != null)
                    {
                        PositionUpdateMessageEvent(this, new PositionUpdateEventArgs(packet as PositionUpdatePacket));
                    }
                    break;
            }
        }
    }

The fired events are handled in a different class; here is the event handling code for the PositionUpdateMessage (the other handlers are irrelevant):

    private readonly Hashtable onlinePlayers = new Hashtable();

    /// <summary>
    /// Constructor that creates a new instance of the GameController class.
    /// </summary>
    private GameController()
    {
        // Start the server on its own thread.
        server = new Thread(networkManager.StartServer);
        server.Start();

        // Subscribe to the PositionUpdateMessageEvent of networkManager.
        networkManager.PositionUpdateMessageEvent += OnPositionUpdateMessageReceived;
    }

    /// <summary>
    /// This event handler is called when a client asks for movement.
    /// </summary>
    private void OnPositionUpdateMessageReceived(object sender, PositionUpdateEventArgs e)
    {
        Point currentLocation = ((PlayerData)onlinePlayers[e.PositionUpdatePacket.PlayerId]).Position;
        Point locationRequested = e.PositionUpdatePacket.Position;
        ((PlayerData)onlinePlayers[e.PositionUpdatePacket.PlayerId]).Position = locationRequested;

        // Broadcast the new position.
        networkManager.BroadcastMessage(new PositionUpdatePacket
        {
            Position = locationRequested,
            PlayerId = e.PositionUpdatePacket.PlayerId
        });
    }

    Read the article

< Previous Page | 306 307 308 309 310 311 312 313 314 315 316 317  | Next Page >