Search Results

Search found 28685 results on 1148 pages for 'query performance'.


  • Need a mysql query to extract data from mm-forms

    - by Kirill
    http://cl.ly/201a11b430bf60ef9e97 is what the rows look like after I clean them up a bit:

        SELECT fk_form_joiner_id, form_key, value
        FROM wp_contactform_submit_data
        WHERE form_key != 'page_post_id'
          AND form_key != 'captcha-802'
          AND form_key != 'page_post_title'
          AND form_key != 'user_ID'

    What I need is a query that would transform this into a structure like:

        id | First-name | Surname-name | your-email | dob | address-1 | address-2 | county | postcode | phone
        -----------------------------------------------------------------------------------------------------
        88 | jack       | Somethin     | [email protected] | dob | (etc)

    I think I know a way to do this through PHP, but there must be a way to do it as a MySQL query. Ultimately I need to export this into a CSV, but if you could give me a query that outputs this into a MySQL table (or at least a workable SELECT query that I could then turn into an insert loop to create a table), I'd take it from there. Thank you.
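    One common way to pivot key/value rows like this is conditional aggregation: group by the submission id and pull each form_key out into its own column. A minimal sketch follows; the form_key names ('first-name', 'surname-name', and so on) are assumed from the desired layout above and would need to match the keys actually stored in the table.

        SELECT
            fk_form_joiner_id AS id,
            MAX(CASE WHEN form_key = 'first-name'   THEN value END) AS `First-name`,
            MAX(CASE WHEN form_key = 'surname-name' THEN value END) AS `Surname-name`,
            MAX(CASE WHEN form_key = 'your-email'   THEN value END) AS `your-email`,
            MAX(CASE WHEN form_key = 'dob'          THEN value END) AS `dob`,
            MAX(CASE WHEN form_key = 'address-1'    THEN value END) AS `address-1`,
            MAX(CASE WHEN form_key = 'address-2'    THEN value END) AS `address-2`,
            MAX(CASE WHEN form_key = 'county'       THEN value END) AS `county`,
            MAX(CASE WHEN form_key = 'postcode'     THEN value END) AS `postcode`,
            MAX(CASE WHEN form_key = 'phone'        THEN value END) AS `phone`
        FROM wp_contactform_submit_data
        WHERE form_key NOT IN ('page_post_id', 'captcha-802', 'page_post_title', 'user_ID')
        GROUP BY fk_form_joiner_id;

    The result of a SELECT like this can be fed into CREATE TABLE ... AS SELECT to materialize a table, or exported with SELECT ... INTO OUTFILE for the CSV step.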

    Read the article

  • Prevent query string manipulation by adding a hash?

    - by saille
    To protect a web application from query string manipulation, I was considering adding a query string parameter to every URL which stores a SHA1 hash of all the other query string parameters and values, then validating against the hash on every request. Does this method provide strong protection against user manipulation of query string values? Are there any other downsides or side-effects to doing this? I am not particularly concerned about the 'ugly' URLs for this private web application. URLs will still be bookmarkable, as the hash will always be the same for the same query string arguments. This is an ASP.NET application.
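    One detail worth calling out: a bare SHA1 over values the user can see can also be recomputed by the user, so the hash only resists tampering if a server-side secret is mixed in (i.e. a keyed HMAC). The C# sketch below illustrates the idea; the helper name, the "sig" parameter, and the source of the secret are all assumptions for illustration, not an established API.

        using System;
        using System.Collections.Specialized;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Text;

        static class QueryStringSigner
        {
            // Hypothetical secret; in a real application this would come from protected configuration.
            private const string Secret = "replace-with-a-long-random-value";

            // Canonicalize the parameters (fixed order, excluding the signature itself) and sign them.
            public static string Sign(NameValueCollection query)
            {
                string canonical = string.Join("&",
                    query.AllKeys
                         .Where(k => k != "sig")
                         .OrderBy(k => k, StringComparer.Ordinal)
                         .Select(k => k + "=" + query[k])
                         .ToArray());

                using (var hmac = new HMACSHA1(Encoding.UTF8.GetBytes(Secret)))
                {
                    byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(canonical));
                    return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
                }
            }

            // On each request, recompute the signature and compare it to the "sig" parameter.
            public static bool IsValid(NameValueCollection query)
            {
                string supplied = query["sig"];
                return supplied != null && supplied == Sign(query);
            }
        }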

    Read the article

  • Dynamic Linq query issue

    - by Alex
    Hey all, I'm trying to query the Netflix OData feed. I have the following query that works fine in LINQPad:

        from g in Genres
        from t in g.Titles
        where g.Name == "Horror" && t.AverageRating == 2 && t.ReleaseYear == 2004
        select t

    But when I move it over to my Silverlight app, the user selects what to search on, so I may or may not have all of the parameters. That being the case, I need to construct the query at runtime. I've looked at the Dynamic Query stuff and that will do fine; the issue is that I need an initial acceptable query to append to, and this doesn't fly:

        from g in Genres
        from t in g.Titles
        select t;

    Any additional thoughts would be appreciated. Thanks in advance.
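    One pattern that sidesteps Dynamic LINQ entirely is to start from the broadest query and append Where clauses only for the criteria the user supplied; nothing is sent to the service until the query is executed. A rough C# sketch, assuming the same Genres/Titles model as above and hypothetical selectedGenre / selectedRating / selectedYear inputs (whether every combination translates to a valid OData URL depends on the service):

        // Base query: all titles in the chosen genre.
        IQueryable<Title> titles = context.Genres
                                          .Where(g => g.Name == selectedGenre)
                                          .SelectMany(g => g.Titles);

        // Append filters only when the user provided a value.
        if (selectedRating.HasValue)
            titles = titles.Where(t => t.AverageRating == selectedRating.Value);

        if (selectedYear.HasValue)
            titles = titles.Where(t => t.ReleaseYear == selectedYear.Value);

        // Execution (and the OData request) happens when the query is enumerated,
        // e.g. via the asynchronous query pattern in Silverlight.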

    Read the article

  • adding DATE_SUB to query to return range of values in mysql

    - by ian
    Here is my original query:

        $query = mysql_query("SELECT s.*, UNIX_TIMESTAMP(`date`) AS `date`, f.userid AS favoritehash
                              FROM songs s
                              LEFT JOIN favorites f ON f.favorite = s.id AND f.userid = '$userhash'
                              ORDER BY s.date DESC");

    This returns all the songs in my DB and joins data from my favorites table so I can display which items a returning visitor has marked as favorites. Visitors are recognized by a unique hash stored in a cookie and in the favorites table. I need to alter this query so that I get just the last month's worth of songs. Below is my attempt at adding DATE_SUB to the query:

        $query = mysql_query("SELECT s.*, UNIX_TIMESTAMP(`date`) AS `date`, f.userid AS favoritehash
                              FROM songs s
                              WHERE `date` >= DATE_SUB(NOW(), INTERVAL 1 MONTH)
                              LEFT JOIN favorites f ON f.favorite = s.id AND f.userid = '$userhash'
                              ORDER BY s.date DESC");

    Suggestions?
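    For reference, in SQL the WHERE clause has to come after all of the JOINs, which is why the attempt above fails to parse. A sketch of the same query with only the clause order changed (the '$userhash' interpolation is kept as in the original, though binding it as a parameter would be safer):

        SELECT s.*, UNIX_TIMESTAMP(s.`date`) AS `date`, f.userid AS favoritehash
        FROM songs s
        LEFT JOIN favorites f
               ON f.favorite = s.id AND f.userid = '$userhash'
        WHERE s.`date` >= DATE_SUB(NOW(), INTERVAL 1 MONTH)
        ORDER BY s.date DESC;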

    Read the article

  • NHibernate Named Query Parameter Numbering @p1 @p2 etc

    - by IanT8
    A colleague recently ran into a problem where he was passing a single date parameter to a named query. In the query, the parameter was used twice, once in an expression and once in a GROUP BY clause. Much to our surprise, we discovered that NHibernate used two variables and sent the single named parameter in twice, as @p1 and @p2. This behaviour caused the SQL query to fail with the usual "a column in the select clause is not in the group by clause" error (I paraphrase, of course). Is this behaviour normal? Can it be changed? It seems to me that if you have a parameter name like :startDate, NHibernate only needs to pass in @p1 no matter how many times you refer to :startDate in the query. Any comments? The problem was worked around by using another sub-query to overcome the SQL parsing error.

    Read the article

  • LINQ to SQL: report the tables used in a query

    - by luke
    Here's the method I want to write:

        public static IEnumerable<String> GetTableNames<T>(this IQueryable<T> query)
        {
            //...
        }

    where the IQueryable is a LINQ to SQL query (is there a more specific interface I should use?). Then if I had a query like this:

        var q = from c in db.Customers
                from p in db.Products
                where c.ID == 3
                select new { p.Name, p.Version };

        q.GetTableNames(); // returns ["Customers", "Products"]

    Basically it would show all the tables that this query touches in the database. It is OK to execute the query to figure this out, too (since that is going to happen anyway). Any ideas?
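    One pragmatic approach with LINQ to SQL is to let the DataContext translate the query and then pull the table names out of the generated SQL. A sketch, assuming the usual [dbo].[TableName] quoting appears in the generated command text, so treat the regular expression as a heuristic rather than a guaranteed contract:

        using System.Collections.Generic;
        using System.Data.Linq;
        using System.Linq;
        using System.Text.RegularExpressions;

        public static class QueryExtensions
        {
            // Asks the DataContext's SQL generator for the command text and scrapes
            // table names from it; the query itself is not executed.
            public static IEnumerable<string> GetTableNames(this IQueryable query, DataContext db)
            {
                string sql = db.GetCommand(query).CommandText;
                return Regex.Matches(sql, @"\[dbo\]\.\[(\w+)\]")
                            .Cast<Match>()
                            .Select(m => m.Groups[1].Value)
                            .Distinct();
            }
        }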

    Read the article

  • SQL Query - 20mil records - Best practice to return information

    - by eqiz
    I have a SQL database that has the following table:

        Table: PhoneRecords
        ID (identity seed)
        FirstName
        LastName
        PhoneNumber
        ZipCode

    Very simple, straightforward table. This table has over 20 million records. I am looking for the best way to run queries that pull out records based on area codes from the table. For instance, here is an example query that I have done:

        SELECT phonenumber, firstname
        FROM [PhoneRecords]
        WHERE (phone LIKE '2012042%')
           OR (phone LIKE '2012046%')
           OR (phone LIKE '2012047%')
           OR (phone LIKE '2012083%')
           OR (phone LIKE '2012088%')
           OR (phone LIKE '2012841%')

    As you can see this is an ugly query, but it would get the job done (if I wasn't running into timeout issues). Can anyone tell me the best way, for speed and optimization, to write the above query and display the results? Currently the query above takes around 2 hours to complete on 9 GB of 1600 MHz RAM with an i7 930 quad core overclocked to 4.01 GHz. I obviously have the computer power required to do such a query, but it still takes too long.
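    Leading-prefix LIKE predicates (no wildcard at the front) can use an index seek, so the single biggest win is usually an index on the phone-number column; without one, every variation of this query scans all 20 million rows. A sketch assuming SQL Server syntax; the column is written as PhoneNumber per the table definition above (the query in the question refers to it as phone):

        CREATE INDEX IX_PhoneRecords_PhoneNumber
            ON PhoneRecords (PhoneNumber)
            INCLUDE (FirstName);   -- covering index: the query never has to touch the base table

        SELECT PhoneNumber, FirstName
        FROM PhoneRecords
        WHERE PhoneNumber LIKE '2012042%'
           OR PhoneNumber LIKE '2012046%'
           OR PhoneNumber LIKE '2012047%'
           OR PhoneNumber LIKE '2012083%'
           OR PhoneNumber LIKE '2012088%'
           OR PhoneNumber LIKE '2012841%';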

    Read the article

  • Access is re-writing - and breaking - my query!

    - by FrustratedWithFormsDesigner
    I have a query in MS Access (2003) that makes use of a subquery. The subquery part looks like this:

        ...FROM (SELECT id, dt, details FROM all_recs WHERE def_cd="ABC-00123") AS q1,...

    When I switch to Table View to verify the results, all is OK. Then I wanted the result of this query to be printed in the page header of a report (the query returns a single row of page-header material). I get an error because the query is suddenly re-written as:

        ...FROM [SELECT id, dt, details FROM all_recs WHERE def_cd="ABC-00123"; ] AS q1,...

    It's OK that the round brackets are automatically replaced by square brackets; Access feels it needs to do that, fine. But why is it adding the ; into the subquery, which causes it to fail? I suppose I could just create new query objects for these subqueries, but it seems a little silly that I should have to do that.

    Read the article

  • if specific row = null, execute query

    - by fusion
    How do I check whether the value of a column is NULL, and only then execute the query? For example:

        col1 | col2 | col3
        01   | abc  |

    I run a query which first checks whether the record exists; if it exists, it executes the update query, and if it doesn't exist, it executes the insert query. How do I check whether col3 is NULL, and if it is NULL, execute the update query?

        $sql = "SELECT uid FROM `users` WHERE uid = '" . $user_id . "'";
        $result = mysql_query($sql, $conn) or die('Error:' . mysql_error());
        $totalrows = mysql_num_rows($result);

        if ($totalrows < 1) {
            insertUser($user_id, $sk, $conn);
        } else {
            updateSessionKey($user_id, $sk, $conn);
        }
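    One thing to keep in mind is that NULL never matches = or !=, so the check has to use IS NULL. A minimal SQL sketch against the users table from the code above (the column name col3 stands in for whatever column actually needs the test):

        SELECT uid
        FROM `users`
        WHERE uid = '$user_id'
          AND col3 IS NULL;

    If that returns a row, the record exists but col3 is still empty, so the update branch runs; if the plain existence check returns nothing at all, the insert branch runs as before.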

    Read the article

  • Passing report values to a query

    - by Beavis
    I'm a novice with Microsoft Access, as my background is mostly .NET. I'm sure what I'm trying to accomplish is dead simple, but I need some direction. I have a report and a query. The query returns a single numeric value based on a single numeric criterion:

        SELECT total FROM table WHERE id = [topic]

    I have placed a text box on my report so I can feed the id to this query and in return get the total. It seems like DLookUp is what I want, but no matter how I construct it, I get "#Error" in the text box when I run the report. Currently my DLookUp looks like this (I just hard-coded the value for now, for simplicity):

        =DLookUp("[total]", "myquery", "[topic] = 3")

    How can I pass a value from a field on my report to a query so I can return the query's single numeric value? Thanks.
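    The usual pattern is to build the criteria string at runtime by concatenating the report control's value into it, rather than hard-coding it. A sketch, assuming the report has a control (or bound field) named topic and that myquery itself does not prompt for a [topic] parameter:

        =DLookUp("[total]", "myquery", "[topic] = " & [topic])

    If the saved query still prompts for a [topic] parameter of its own, DLookUp typically cannot supply it; in that case the parameter is removed from the query and the filtering is left to the criteria argument shown above.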

    Read the article

  • Timeout Expired on INSERT Query

    - by Sachin Gaur
    I am trying to insert a row into a table using a simple INSERT query in a transaction. It works fine in SQL Server directly, but I am not able to insert the data using my business object. I am calling a SELECT query using the command like this:

        Using cm As New SqlCommand
            With cm
                .Connection = tr.Connection
                .Transaction = tr
                .CommandType = CommandType.Text
                .CommandText = Some Select Query
                .ExecuteScalar()
                '' Do something
                .CommandText = Insert Query
                .ExecuteNonQuery()
            End With
        End Using

    I am getting the "Timeout period expired" error at the .ExecuteNonQuery() line. Any other DML query runs perfectly fine at this point. Can anyone tell me the reason?
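    For what it's worth, a timeout on a plain single-row INSERT is usually a symptom of blocking (another connection or transaction holding a lock on the target table) rather than of the statement itself being slow. Raising the timeout, as sketched below, only buys time for diagnosis and does not remove the lock; SqlCommand.CommandTimeout is measured in seconds.

        '' Sketch: lengthen the timeout while investigating the real cause.
        With cm
            .Connection = tr.Connection
            .Transaction = tr
            .CommandTimeout = 120
            .CommandType = CommandType.Text
        End With

    While the INSERT is hanging, running sp_who2 (or checking sys.dm_exec_requests) from another SQL Server session will show whether the statement is blocked and by which session.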

    Read the article

  • Connection Reset on MySQL query

    - by sunwukung
    OK, I'm flummoxed. (I've asked this question over on Stack Overflow too, but I need to get it fixed, so I'm asking here as well; any help is GREATLY appreciated.) I'm trying to execute a query on a database (locally) and I keep getting a connection reset error. I've been using the method below in a generic DAO class to build a query string and pass it to the Zend_Db API:

        public function insert($params)
        {
            $loop = false;
            $keys = $values = '';
            foreach ($params as $k => $v) {
                if ($loop == true) {
                    $keys .= ',';
                    $values .= ',';
                }
                $keys .= $this->db->quoteIdentifier($k);
                $values .= $this->db->quote($v);
                $loop = true;
            }
            $sql = "INSERT INTO " . $this->table_name . " ($keys) VALUES ($values)";
            // formatResult returns an array of info regarding the status and any result sets of the query
            // I've commented that method call out anyway, so I don't think it's that
            try {
                $this->db->query($sql);
                return $this->formatResult(array(
                    true,
                    'New record inserted into: ' . $this->table_name
                ));
            } catch (PDOException $e) {
                return $this->formatResult($e);
            }
        }

    So far this has worked fine; the errors have been occurring since we generated new tables to record user input. The insert string looks like this:

        INSERT INTO tablename(`id`,`title`,`summary`,`description`,`keywords`,`type_id`,`categories`) VALUES ('5539','Sample Title','Sample content',' \'Lorem ipsum dolor sit amet, consectetur adipiscing elit. In et pellentesque mauris. Curabitur hendrerit, leo id ultrices pellentesque, est purus mattis ligula, vitae imperdiet neque ligula bibendum sapien. Curabitur aliquet nisi et odio pharetra tincidunt. Phasellus sed iaculis nisl. Fusce commodo mauris et purus vehicula dictum. Nulla feugiat molestie accumsan. Donec fermentum libero in risus tempus elementum aliquam et magna. Fusce vitae sem metus. Aenean commodo pharetra risus, nec pellentesque augue ullamcorper nec. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Nullam vel elit libero. Vestibulum in turpis nunc.\'','this,is,a,sample,array',1,'category title')

    Here are the parameters it's getting before assembling the query (var_dump):

        array
          'id' => string '1' (length=4)
          'title' => string 'Sample Title' (length=12)
          'summary' => string 'Sample content' (length=14)
          'description' => string '<p>'Lorem ipsum dolor sit amet, consectetur adipiscing elit. In et pellentesque mauris. Curabitur hendrerit, leo id ultrices pellentesque, est purus mattis ligula, vitae imperdiet neque ligula bibendum sapien. Curabitur aliquet nisi et odio pharetra tincidunt. Phasellus sed iaculis nisl. Fusce commodo mauris et purus vehicula dictum. Nulla feugiat molestie accumsan. Donec fermentum libero in risus tempus elementum aliquam et magna. Fusce vitae sem metus. Aenean commodo pharetra risus, nec pellentesque augue'... (length=677)
          'keywords' => string 'this,is,a,sample,array' (length=22)
          'type_id' => int 1
          'categories' => string 'category title' (length=43)

    The next port of call was checking the limits on the table, since it seems to insert if the length of "description" is around the 300 mark (it varies between 310 and 330). The field limit is set to VARCHAR(1500), and the validation on this field won't allow anything bigger than 1200 characters with HTML, 800 without. The real kicker is that if I take this SQL string and execute it via the command line, it works fine, so I can't for the life of me figure out what's wrong. I've tried extending the server parameters, i.e.
    http://stackoverflow.com/questions/1964554/unexpected-connection-reset-a-php-or-an-apache-issue
    So, in a nutshell, I'm stumped. Any ideas?
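    As a side note, Zend_Db can build the INSERT and bind the values itself, which removes the hand-assembled SQL string from the picture and is a useful way to test whether the reset is related to quoting or statement length. A sketch, assuming $this->db is a Zend_Db adapter as in the method above:

        public function insert($params)
        {
            try {
                // Zend_Db_Adapter_Abstract::insert() quotes identifiers and binds values internally.
                $this->db->insert($this->table_name, $params);
                return $this->formatResult(array(
                    true,
                    'New record inserted into: ' . $this->table_name
                ));
            } catch (Exception $e) {
                return $this->formatResult($e);
            }
        }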

    Read the article

  • CPU/JVM/JBoss 7 slows down over time

    - by lukas
    I'm experiencing a performance slowdown on JBoss 7.1.1 Final. I wrote a simple program that demonstrates this behavior: it generates an array of 100,000 random integers and runs bubble sort on it.

        @Model
        public class PerformanceTest {

            public void proceed() {
                long now = System.currentTimeMillis();
                int[] arr = new int[100000];
                for (int i = 0; i < arr.length; i++) {
                    arr[i] = (int) (Math.random() * 200000);
                }
                long now2 = System.currentTimeMillis();
                System.out.println((now2 - now) + "ms took to generate array");
                now = System.currentTimeMillis();
                bubbleSort(arr);
                now2 = System.currentTimeMillis();
                System.out.println((now2 - now) + "ms took to bubblesort array");
            }

            public void bubbleSort(int[] arr) {
                boolean swapped = true;
                int j = 0;
                int tmp;
                while (swapped) {
                    swapped = false;
                    j++;
                    for (int i = 0; i < arr.length - j; i++) {
                        if (arr[i] > arr[i + 1]) {
                            tmp = arr[i];
                            arr[i] = arr[i + 1];
                            arr[i + 1] = tmp;
                            swapped = true;
                        }
                    }
                }
            }
        }

    Just after I start the server, it takes approximately 22 seconds to run this code. After a few days of JBoss 7.1.1 running, it takes 330 seconds to run the same code. In both cases, I launch the code when CPU utilization is very low (say, 1%). Any ideas why? I run the server with the following arguments:

        -Xms1280m -Xmx2048m -XX:MaxPermSize=2048m -Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Duser.timezone=UTC -Djboss.server.default.config=standalone-full.xml -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n

    I'm running it on Linux 2.6.32-279.11.1.el6.x86_64 with java version "1.7.0_07". It's within a J2EE application. I use CDI, so I have a button on a JSF page that calls the "proceed" method on the @RequestScoped component PerformanceTest. I deploy this as a separate WAR file, and even if I undeploy the other applications, it doesn't change the performance. It's a virtual machine that is sharing CPUs with another machine, but that one doesn't consume anything. Here's yet another observation: when the server is freshly started and I run the bubble sort, it utilizes 100% of one processor core. It never switches to another core or drops utilization below 95%. However, once the server has been running for some time and I'm experiencing the performance problems, the method above still utilizes a CPU core at around 100%, but I just found out from htop that the task is being switched very often to other cores. That is, at the beginning it's running on core #1, after say 2 seconds it's running on #5, then after say 2 seconds #8, etc. Furthermore, the utilization is not kept at 100% on that core but sometimes drops to 80% or even lower. For a freshly started server, even if I simulate a load, it never switches the task to another core.

    Read the article

  • What a Performance! MySQL 5.5 and InnoDB 1.1 running on Oracle Linux

    - by zeynep.koch(at)oracle.com
    The MySQL performance team in Oracle has recently completed a series of benchmarks comparing Read/Write and Read-Only performance of MySQL 5.5 with the InnoDB and MyISAM storage engines. Compared to MyISAM, InnoDB delivered 35x higher throughput on the Read/Write test and 5x higher throughput on the Read-Only test, with 90% scalability across 36 CPU cores. A full analysis of the results and the MySQL configuration parameters is documented in a new whitepaper. In addition to the benchmark, the whitepaper also includes:
    - A discussion of the use cases for each storage engine
    - Best practices for users considering the migration of existing applications from MyISAM to InnoDB
    - A summary of the performance and scalability enhancements introduced with MySQL 5.5 and InnoDB 1.1
    The benchmark itself was based on Sysbench, running on AMD Opteron "Magny-Cours" processors and Oracle Linux with the Unbreakable Enterprise Kernel. You can learn more about MySQL 5.5 and InnoDB 1.1 here, and download it here to test whether you see performance gains in your own real-world applications. By Mat Keep

    Read the article

  • Is JSF really ready to deliver high performance web applications?

    - by aklin81
    I have heard a lot of good things about JSF, but as far as I know people have also had serious complaints about this technology in the past, and I'm not aware of how much the situation has improved. We are considering JSF as a probable technology for a social network project, but we don't know how JSF performs, nor could we find any existing high-performance website that uses JSF. People complain about its performance and scalability issues. We are still not sure we are doing the right thing by choosing JSF, and would like to hear from you all and take your input into consideration. Is it possible to configure JSF to satisfy the high-performance needs of a social networking service? And to what extent is it possible to work around the current problems in JSF?

    Read the article

  • LINQ Query using Multiple From and Multiple Collections

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication2
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var emps = GetEmployees();
                    var deps = GetDepartments();

                    var results = from e in emps
                                  from d in deps
                                  where e.EmpNo >= 1 && d.DeptNo <= 30
                                  select new { Emp = e, Dept = d };

                    foreach (var item in results)
                    {
                        Console.WriteLine("{0},{1},{2},{3}", item.Dept.DeptNo, item.Dept.DName, item.Emp.EmpNo, item.Emp.EmpName);
                    }
                }

                private static List<Emp> GetEmployees()
                {
                    return new List<Emp>() {
                        new Emp() { EmpNo = 1, EmpName = "Smith", DeptNo = 10 },
                        new Emp() { EmpNo = 2, EmpName = "Narayan", DeptNo = 20 },
                        new Emp() { EmpNo = 3, EmpName = "Rishi", DeptNo = 30 },
                        new Emp() { EmpNo = 4, EmpName = "Guru", DeptNo = 10 },
                        new Emp() { EmpNo = 5, EmpName = "Priya", DeptNo = 20 },
                        new Emp() { EmpNo = 6, EmpName = "Riya", DeptNo = 10 }
                    };
                }

                private static List<Department> GetDepartments()
                {
                    return new List<Department>() {
                        new Department() { DeptNo = 10, DName = "Accounts" },
                        new Department() { DeptNo = 20, DName = "Finance" },
                        new Department() { DeptNo = 30, DName = "Travel" }
                    };
                }
            }

            class Emp
            {
                public int EmpNo { get; set; }
                public string EmpName { get; set; }
                public int DeptNo { get; set; }
            }

            class Department
            {
                public int DeptNo { get; set; }
                public String DName { get; set; }
            }
        }

    Read the article

  • Romanian partner Omnilogic Delivers “No Limits” Scalability, Performance, Security, and Affordability through Next-Generation, Enterprise-Grade Engineered Systems

    - by swalker
    Omnilogic SRL is a leading technology and information systems provider in Romania and central and Eastern Europe. An Oracle Value-Added Distributor Partner, Omnilogic resells Oracle software, hardware, and engineered systems to Oracle Partner Network members and provides specialized training, support, and testing facilities. Independent software vendors (ISVs) also use Omnilogic’s demonstration and testing facilities to upgrade the performance and efficiency of their solutions and those of their customers by migrating them from competitor technologies to Oracle platforms. Omnilogic also has a dedicated offering for ISV solutions, based on Oracle technology in a hosting service provider model.

    Omnilogic wanted to help Oracle Partners and ISVs migrate solutions to Oracle Exadata and sell Oracle Exadata to end-customers. It installed Oracle Exadata Database Machine X2-2 Quarter Rack at its data center to create a demonstration and testing environment. Demonstrations proved that Oracle Exadata achieved processing speeds up to 100 times faster than competitor systems, cut typical back-up times from 6 hours to 20 minutes, and stored 10 times more data. Oracle Partners and ISVs learned that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, without business disruption, and with reduced ongoing operating costs.

    Challenges

    A word from Omnilogic: “Oracle Exadata is the new killer application—the smartest solution on the market. There is no competition.” – Sorin Dragomir, Chief Operating Officer, Omnilogic SRL

    - Enable Oracle Partners in Romania and central and eastern Europe to achieve Oracle Exadata Ready status by providing facilities to test and optimize existing applications and build real-life proofs of concept (POCs) for new solutions on Oracle Exadata Database Machine
    - Provide technical support and demonstration facilities for ISVs migrating their customers’ solutions from competitor technologies to Oracle Exadata to maximize performance, scalability, and security; optimize hardware and datacenter space; cut maintenance costs; and improve return on investment
    - Demonstrate the power of Oracle Exadata’s high-performance, high-capacity engineered systems for customer-facing businesses, such as government organizations, telecommunications, banking and insurance, and utility companies, which typically require continuous availability to support very large data volumes
    - Showcase Oracle Exadata’s unchallenged online transaction processing (OLTP) capabilities that cut application run times to provide unrivalled query turnaround and user response speeds while significantly reducing back-up times and eliminating the risk of unplanned outages
    - Capitalize on providing a world-class training and demonstration environment for Oracle Exadata to accelerate sales with Oracle Partners

    Solutions

    - Created a testing environment to enable Oracle Partners and ISVs to test their own solutions and those of their customers on Oracle Exadata running on Oracle Enterprise Linux or Oracle Solaris Express to benchmark performance prior to migration
    - Leveraged expertise on Oracle Exadata to offer Oracle Exadata training, migration, and support seminars and to showcase live demonstrations for Oracle Partners
    - Proved how Oracle Exadata’s pre-engineered systems, which come assembled, configured, and ready to run, reduce deployment time and cost, minimize risk, and help customers achieve their full performance potential immediately after go-live
    - Increased processing speeds 10-fold, with zero data loss, for a telecommunications provider’s client-facing customer relationship management solution
    - Achieved performance improvements of between 6 and 100 times for financial and utility company applications currently running on IBM, Microsoft, or SAP HANA platforms
    - Showed how daily closure procedures carried out overnight by banks, insurance companies, and other financial institutions to analyze each day’s business can typically be cut from around six hours to 20 minutes, some 18 times faster, when running on Oracle Exadata
    - Simulated concurrent back-ups while running applications under normal working conditions to prove that Oracle Exadata-based solutions can be backed up during business hours without causing bottlenecks or impacting the end-user experience
    - Demonstrated that Oracle Exadata’s built-in analytics, data mining, and OLTP capabilities make it the highest-performance, lowest-cost choice for large data warehousing operations
    - Showed how Oracle Exadata’s columnar compression and intelligent storage architecture allow 10 times more data to be stored than on competitor platforms
    - Demonstrated how Oracle Exadata cuts hardware requirements significantly by consolidating workloads onto fewer servers, which delivers greater power efficiency and lower operating costs than competing systems from IBM and other manufacturers
    - Proved to ISVs that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, and with minimal business disruption
    - Demonstrated how storage servers, database servers, and network switches can be added incrementally and inexpensively to the Oracle Exadata platform to support business expansion
    - On track to grow revenues by 10% in year one and by 15% annually thereafter through increased business generated from Oracle Partners and ISVs

    Read the article

  • UnboundLocalError: local variable 'rows' referenced before assignment

    - by patrick
    I'm trying to make a database connection from another script, but the script doesn't work properly: if I do a 'print' on the rows, I get the value 'null'. But if I use a 'select * from incidents' query, then I get the result from the incidents table.

        import database

        rows = database.database("INSERT INTO incidents VALUES(3 ,'test_title1', 'test', TO_DATE('25-07-2012', 'DD-MM-YYYY'), CURRENT_TIMESTAMP, 'sector', 50, 60)")
        #print database.database()
        print rows

    The database.py script:

        import psycopg2
        import sys
        import logfile

        def database(query):
            logfile.log(20, 'database.py', 'Executing...')
            con = None
            try:
                con = psycopg2.connect(database='incidents', user='ipfit5', password='tester')
                cur = con.cursor()
                #print query
                cur.execute(query)
                rows = cur.fetchall()
                con.commit()
                #test row does work
                #cur.execute("INSERT INTO incidents VALUES(3 ,'test_titel1', 'test', TO_DATE('25-07-2012', 'DD-MM-YYYY'), CURRENT_TIMESTAMP, 'sector', 50, 60)")
            except:
                logfile.log(40, 'database.py', 'Er is iets mis gegaan')
                logfile.log(40, 'database.py', str(sys.exc_info()))
            finally:
                if con:
                    con.close()
                return rows
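    For context, the UnboundLocalError happens because an INSERT produces no result set, so cur.fetchall() raises an exception and the name rows is never assigned; the finally block then tries to return it anyway. A sketch of one way to make the function safe for both SELECTs and INSERTs (same connection settings as above, Python 2 style to match the question):

        import sys
        import psycopg2

        def database(query):
            con = None
            rows = None                              # default, so the name always exists
            try:
                con = psycopg2.connect(database='incidents', user='ipfit5', password='tester')
                cur = con.cursor()
                cur.execute(query)
                if cur.description is not None:      # only SELECT-like statements return rows
                    rows = cur.fetchall()
                con.commit()
            except Exception:
                print >> sys.stderr, str(sys.exc_info())
            finally:
                if con:
                    con.close()
            return rows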

    Read the article

  • Hibernate ResultTransformer with JPA API

    - by Timo Westkämper
    Has anyone figured out a smart way to do query result transformation in the JPA API through a mechanism similar to specifying a ResultTransformer in Hibernate? All I can think of is transforming each result row after it has been returned by the Query. Is there any other way? For constructor projections (e.g. new DTO(arg1, arg2)) it can be defined in the JPQL query, at least with Hibernate, but what about other cases?
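    When Hibernate is the JPA provider, one workaround is to unwrap the provider's native query from the JPA Query and set the transformer on that. A Java sketch, assuming JPA 2.0's unwrap and a hypothetical Title entity and TitleDto bean with matching setters; whether alias-to-bean binding works for a given HQL statement depends on the aliases in its select clause:

        import java.util.List;
        import javax.persistence.EntityManager;
        import org.hibernate.transform.Transformers;

        public class DtoQueryExample {

            // TitleDto is a hypothetical bean with a no-arg constructor and setName/setReleaseYear setters.
            public static List<TitleDto> loadTitleDtos(EntityManager em) {
                javax.persistence.Query jpaQuery =
                        em.createQuery("select t.name as name, t.releaseYear as releaseYear from Title t");

                // Unwrap the Hibernate query behind the JPA interface and attach the transformer.
                org.hibernate.Query hibernateQuery = jpaQuery.unwrap(org.hibernate.Query.class);
                hibernateQuery.setResultTransformer(Transformers.aliasToBean(TitleDto.class));

                @SuppressWarnings("unchecked")
                List<TitleDto> dtos = hibernateQuery.list();
                return dtos;
            }
        }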

    Read the article

  • Mongodb performance on Windows

    - by Chris
    I've been researching NoSQL options available for .NET lately, and MongoDB is emerging as a clear winner in terms of availability and support, so tonight I decided to give it a go. I downloaded version 1.2.4 (Windows x64 binary) from the mongodb site and ran it with the following options:

        C:\mongodb\bin>mkdir data
        C:\mongodb\bin>mongod -dbpath ./data --cpu --quiet

    I then loaded up the latest mongodb-csharp driver from http://github.com/samus/mongodb-csharp and immediately ran the benchmark program. Having heard about how "amazingly fast" MongoDB is, I was rather shocked at the poor benchmark performance.

        Starting Tests
        encode (small).........................................320000 00:00:00.0156250
        encode (medium)........................................80000 00:00:00.0625000
        encode (large).........................................1818 00:00:02.7500000
        decode (small).........................................320000 00:00:00.0156250
        decode (medium)........................................160000 00:00:00.0312500
        decode (large).........................................2370 00:00:02.1093750
        insert (small, no index)...............................2176 00:00:02.2968750
        insert (medium, no index)..............................2269 00:00:02.2031250
        insert (large, no index)...............................778 00:00:06.4218750
        insert (small, indexed)................................2051 00:00:02.4375000
        insert (medium, indexed)...............................2133 00:00:02.3437500
        insert (large, indexed)................................835 00:00:05.9843750
        batch insert (small, no index).........................53333 00:00:00.0937500
        batch insert (medium, no index)........................26666 00:00:00.1875000
        batch insert (large, no index).........................1114 00:00:04.4843750
        find_one (small, no index).............................350 00:00:14.2812500
        find_one (medium, no index)............................204 00:00:24.4687500
        find_one (large, no index).............................135 00:00:37.0156250
        find_one (small, indexed)..............................352 00:00:14.1718750
        find_one (medium, indexed).............................184 00:00:27.0937500
        find_one (large, indexed)..............................128 00:00:38.9062500
        find (small, no index).................................516 00:00:09.6718750
        find (medium, no index)................................316 00:00:15.7812500
        find (large, no index).................................216 00:00:23.0468750
        find (small, indexed)..................................532 00:00:09.3906250
        find (medium, indexed).................................346 00:00:14.4375000
        find (large, indexed)..................................212 00:00:23.5468750
        find range (small, indexed)............................440 00:00:11.3593750
        find range (medium, indexed)...........................294 00:00:16.9531250
        find range (large, indexed)............................199 00:00:25.0625000
        Press any key to continue...

    For starters, I can get better non-batch insert performance from SQL Server Express. What really struck me, however, was the slow performance of the find_nnnn queries. Why is retrieving data from MongoDB so slow? What am I missing?

    Edit: This was all on the local machine, no network latency or anything. MongoDB's CPU usage ran at about 75% the entire time the test was running.

    Edit 2: Also, I ran a trace on the benchmark program and confirmed that 50% of the CPU time was spent waiting for MongoDB to return data, so it's not a performance issue with the C# driver.

    Read the article

  • How to monitor CPU usage and performance on a Hyper-V server with several VM's

    - by Bjørn
    Hello, I have a server running Windows 2008 64-bit Hyper-V, with 8 GB of RAM and an Intel Xeon X3440 @ 2.53 GHz, which gives me 8 logical cores in the performance monitor on the host system. I have set up three virtual machines, all running Windows 2008 32-bit:
    - Build server, running Team City
    - Staging server
    - SQL Server, running SQL Server 2005
    I have some trouble with the setup in that the host remains responsive at all times, even though the VMs are seemingly working at 100% CPU and are very sluggish and unresponsive. (I have asked a separate question about that.) So the question here is: what is the best way to monitor how the physical CPUs are actually utilized? The reason I am asking is that I am told I cannot reliably use Task Manager to monitor CPU usage in a VM.
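    For reference, the hypervisor publishes its own Performance Monitor counters on the host that reflect physical CPU use independently of what the guests (or the host's Task Manager) report. Assuming the standard Hyper-V counter sets are present, the ones usually watched are:

        \Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time   (overall physical CPU load)
        \Hyper-V Hypervisor Virtual Processor(*)\% Total Run Time        (per virtual processor, per VM)

    These can be added in perfmon on the host, or collected with a Data Collector Set if you want to trend the load over time.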

    Read the article
