Search Results

Search found 2042 results on 82 pages for 'average'.


  • Dirt compression from vehicle tires

    - by Mungoid
    So I sort of have this working, but it isn't correct because it just averages, so I wanted to know if anyone here has any ideas. I'm trying to simulate loose dirt compressing under the tires of a vehicle, to reduce the potential bumpiness of 'chunky' terrain. Currently I have a bounding-box shape around each tire, set a little lower so it intersects the terrain. Each frame I (currently) average the heights of every terrain point that falls within that tire's box bounds, and then set them all to that average. Clearly this won't work in most cases because, for example, if I'm on a hill, the terrain deforms far too much. One idea was to cap how much the points can rise or fall per frame, but that still doesn't behave properly and sometimes looks more like steps than smooth dirt. I suspect there is a bit more to this than what I'm currently doing, but I'm not sure where to look. Could anyone here shed some light on this subject? Would I benefit from looking up a smoothing algorithm or something similar?
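
    One direction that might be worth trying (a rough sketch, not a drop-in solution): instead of collapsing the whole footprint to a single mean, blend each cell toward a locally smoothed height and clamp how far any cell may move per frame, so hills keep their overall shape and the result doesn't turn into steps. Illustrative Python below; the array layout, mask and constants are assumptions, not from the original post.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def settle_under_tire(heights, mask, max_delta=0.02, kernel=3):
            # blend toward a small-neighbourhood average rather than one global mean,
            # so the slope of a hill survives the compression
            smoothed = uniform_filter(heights, size=kernel)
            # clamp the per-frame movement so the deformation stays gradual
            delta = np.clip(smoothed - heights, -max_delta, max_delta)
            heights[mask] += delta[mask]
            return heights

        # usage sketch: a heightfield plus a boolean mask for one tire's bounding box
        heights = (np.random.rand(256, 256) * 5.0).astype(np.float32)
        mask = np.zeros(heights.shape, dtype=bool)
        mask[100:104, 100:108] = True
        heights = settle_under_tire(heights, mask)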

    Read the article

  • At what point does programming become a useful skill?

    - by Elip
    This is probably a very difficult question to answer because of its subjectivity, but even a vague guess would help me out. Now that Khan Academy is beginning to offer computer science lectures, I'm getting an itch to learn programming again. I am maybe a bit more technical than your average computer user: I use Ubuntu as my OS and LaTeX for writing, and I know some small tricks like regular expressions or Boolean search on Google. However, from my previous attempts to learn programming I realized I do not have a natural aptitude for it, and I don't seem to enjoy the process either. Yet I am fairly certain that basic proficiency in programming could prove very beneficial for me career-wise; I also often get ideas for little scripts that I cannot implement. My question is: let's say you study programming one hour a day on average. At what point do you become good enough that programming can be used to automate tasks and actually save time? Do you think programming is worth picking up if you never have the ambition to make it your career or even your hobby, but use it strictly for utility purposes?

    Read the article

  • SQL SERVER – PAGEIOLATCH_DT, PAGEIOLATCH_EX, PAGEIOLATCH_KP, PAGEIOLATCH_SH, PAGEIOLATCH_UP – Wait Type – Day 9 of 28

    - by pinaldave
    It is very easy to say that you should replace your hardware because it is not up to the mark. In reality, it is very difficult to do. It is hard to convince an infrastructure team to change any hardware just because it is not performing at its best. I once had a nightmare over this very issue while dealing with an infrastructure team, when I suggested that they replace their faulty hardware, because initially they would not accept that their hardware was at fault. It is easy to say "Trust me, I am correct", but it is equally important to back that statement with logical reasoning. PAGEIOLATCH_XX is the kind of wait stat we tend to blame directly on the underlying IO subsystem. Of course, most of the time that is correct – the underlying subsystem usually is the problem. From Books Online: PAGEIOLATCH_DT – Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Destroy mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_EX – Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Exclusive mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_KP – Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Keep mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_SH – Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Shared mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_UP – Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Update mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_XX explanation: simply put, this wait type occurs when a task is waiting for data to be read from disk into the buffer cache. Reducing PAGEIOLATCH_XX waits: just like any other wait type, this is a challenging and interesting subject to resolve. Here are a few things you can experiment with: Improve your IO subsystem speed (see the first paragraph of this article – it is far easier to recommend this step than to actually implement it). This wait type can also result from memory pressure or other memory issues; setting aside a faulty IO subsystem, it warrants proper analysis of the memory counters, because if memory is not optimal and cannot receive the IO data, that situation alone can produce this kind of wait. Proper placement of files is very important: check the file system for proper placement – LDF and MDF on separate drives, TempDB on a separate drive, hot-spot tables on a separate filegroup (and on a separate disk), and so on. Check the file statistics and see whether there are high IO read and IO write stalls (see SQL SERVER – Get File Statistics Using fn_virtualfilestats). It is quite possible that there are no proper indexes on the system and there are lots of table scans and heap scans; creating proper indexes can reduce IO considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can significantly reduce CPU, memory and IO (assuming the covering index has far fewer columns than the clustered table, and all the usual "it depends" conditions hold).
    You can refer to two articles I wrote earlier that talk about how to optimize indexes: Create Missing Indexes and Drop Unused Indexes. Updating statistics can help the Query Optimizer produce an optimal plan, whether directly or indirectly. I have seen that updating statistics with a full scan (again, if your database is huge and you cannot do this – never mind!) can provide the SQL Server optimizer with the information it needs to arrive at an efficient plan. Checking memory-related Perfmon counters: SQLServer: Memory Manager\Memory Grants Pending (consistently higher than 0-2 is a concern) SQLServer: Memory Manager\Memory Grants Outstanding (consistently high value; benchmark it) SQLServer: Buffer Manager\Buffer cache hit ratio (higher is better; greater than 90% for a usually smooth-running system) SQLServer: Buffer Manager\Page life expectancy (consistently lower than 300 seconds is a concern) Memory: Available MBytes (information only) Memory: Page Faults/sec (benchmark only) Memory: Pages/sec (benchmark only) Checking disk-related Perfmon counters: Avg. Disk sec/Read (consistently higher than 4-8 milliseconds is not good) Avg. Disk sec/Write (consistently higher than 4-8 milliseconds is not good) Avg. Disk Read/Write Queue Length (consistently higher than your benchmark is not good) Note: the information presented here is from my experience, and I do not claim it to be completely accurate. I suggest reading Books Online for further clarification. All the discussion of wait stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SQL SERVER – LCK_M_XXX – Wait Type – Day 15 of 28

    - by pinaldave
    Locking is a mechanism used by the SQL Server Database Engine to synchronize access to the same piece of data by multiple users at the same time. In simpler words, it maintains the integrity of data by protecting (or preventing) access to the database object. From Books Online: LCK_M_BU – Occurs when a task is waiting to acquire a Bulk Update (BU) lock. LCK_M_IS – Occurs when a task is waiting to acquire an Intent Shared (IS) lock. LCK_M_IU – Occurs when a task is waiting to acquire an Intent Update (IU) lock. LCK_M_IX – Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock. LCK_M_S – Occurs when a task is waiting to acquire a Shared lock. LCK_M_SCH_M – Occurs when a task is waiting to acquire a Schema Modify lock. LCK_M_SCH_S – Occurs when a task is waiting to acquire a Schema Share lock. LCK_M_SIU – Occurs when a task is waiting to acquire a Shared With Intent Update lock. LCK_M_SIX – Occurs when a task is waiting to acquire a Shared With Intent Exclusive lock. LCK_M_U – Occurs when a task is waiting to acquire an Update lock. LCK_M_UIX – Occurs when a task is waiting to acquire an Update With Intent Exclusive lock. LCK_M_X – Occurs when a task is waiting to acquire an Exclusive lock. LCK_M_XXX explanation: I think the explanation of this wait type is the simplest. When a task is waiting to acquire a lock on some resource, this wait type occurs. The common reason is that the resource is already locked and some other operation is going on within it. The wait also indicates that the resource is not available, or is occupied at the moment, for some reason. If this wait type is very high, there is a good chance that waiting queries will start to time out, and client applications may see degraded performance as well. You can use various methods to find blocking queries: EXEC sp_who2, SQL SERVER – Quickest Way to Identify Blocking Query and Resolution – Dirty Solution, DMV – sys.dm_tran_locks, DMV – sys.dm_os_waiting_tasks. Reducing LCK_M_XXX waits: Check your explicit transactions. If transactions are very long, this wait type can start building up because of other waiting transactions; keep transactions small. Serializable isolation can build up this wait type; if that isolation level is acceptable for your business, the wait may simply be natural. The default isolation level of SQL Server is Read Committed. One of my clients changed their isolation level to Read Uncommitted; I strongly discourage this, because it will most likely lead to lots of dirty reads. Identify the blocking queries using the methods described above, and then optimize them. Partitioning can be one option to consider, because it allows transactions to execute concurrently on different partitions. If there are runaway queries, use a timeout (please discuss this solution with your database architect first, as timeouts can work against you).
    Check whether there are memory- or IO-related issues using the following counters. Checking memory-related Perfmon counters: SQLServer: Memory Manager\Memory Grants Pending (consistently higher than 0-2 is a concern) SQLServer: Memory Manager\Memory Grants Outstanding (consistently high value; benchmark it) SQLServer: Buffer Manager\Buffer cache hit ratio (higher is better; greater than 90% for a usually smooth-running system) SQLServer: Buffer Manager\Page life expectancy (consistently lower than 300 seconds is a concern) Memory: Available MBytes (information only) Memory: Page Faults/sec (benchmark only) Memory: Pages/sec (benchmark only) Checking disk-related Perfmon counters: Avg. Disk sec/Read (consistently higher than 4-8 milliseconds is not good) Avg. Disk sec/Write (consistently higher than 4-8 milliseconds is not good) Avg. Disk Read/Write Queue Length (consistently higher than your benchmark is not good) Read all the posts in the Wait Types and Queues series. Note: the information presented here is from my experience, and I do not claim it to be completely accurate. I suggest reading Books Online for further clarification. All the discussion of wait stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SQL SERVER – SSMS: Backup and Restore Events Report

    - by Pinal Dave
    A DBA wears multiple hats and in fact does more than what an eye can see. One of the core task of a DBA is to take backups. This looks so trivial that most developers shrug this off as the only activity a DBA might be doing. I have huge respect for DBA’s all around the world because even if they seem cool with all the scripting, automation, maintenance works round the clock to keep the business working almost 365 days 24×7, their worth is knowing that one day when the systems / HDD crashes and you have an important delivery to make. So these backup tasks / maintenance jobs that have been done come handy and are no more trivial as they might seem to be as considered by many. So the important question like: “When was the last backup taken?”, “How much time did the last backup take?”, “What type of backup was taken last?” etc are tricky questions and this report lands answers to the same in a jiffy. So the SSMS report, we are talking can be used to find backups and restore operation done for the selected database. Whenever we perform any backup or restore operation, the information is stored in the msdb database. This report can utilize that information and provide information about the size, time taken and also the file location for those operations. Here is how this report can be launched.   Once we launch this report, we can see 4 major sections shown as listed below. Average Time Taken For Backup Operations Successful Backup Operations Backup Operation Errors Successful Restore Operations Let us look at each section next. Average Time Taken For Backup Operations Information shown in “Average Time Taken For Backup Operations” section is taken from a backupset table in the msdb database. Here is the query and the expanded version of that particular section USE msdb; SELECT (ROW_NUMBER() OVER (ORDER BY t1.TYPE))%2 AS l1 ,       1 AS l2 ,       1 AS l3 ,       t1.TYPE AS [type] ,       (AVG(DATEDIFF(ss,backup_start_date, backup_finish_date)))/60.0 AS AverageBackupDuration FROM backupset t1 INNER JOIN sys.databases t3 ON ( t1.database_name = t3.name) WHERE t3.name = N'AdventureWorks2014' GROUP BY t1.TYPE ORDER BY t1.TYPE On my small database the time taken for differential backup was less than a minute, hence the value of zero is displayed. This is an important piece of backup operation which might help you in planning maintenance windows. Successful Backup Operations Here is the expanded version of this section.   This information is derived from various backup tracking tables from msdb database.  Here is the simplified version of the query which can be used separately as well. SELECT * FROM sys.databases t1 INNER JOIN backupset t3 ON (t3.database_name = t1.name) LEFT OUTER JOIN backupmediaset t5 ON ( t3.media_set_id = t5.media_set_id) LEFT OUTER JOIN backupmediafamily t6 ON ( t6.media_set_id = t5.media_set_id) WHERE (t1.name = N'AdventureWorks2014') ORDER BY backup_start_date DESC,t3.backup_set_id,t6.physical_device_name; The report does some calculations to show the data in a more readable format. For example, the backup size is shown in KB, MB or GB. I have expanded first row by clicking on (+) on “Device type” column. That has shown me the path of the physical backup file. Personally looking at this section, the Backup Size, Device Type and Backup Name are critical and are worth a note. As mentioned in the previous section, this section also has the Duration embedded inside it. Backup Operation Errors This section of the report gets data from default trace. You might wonder how. 
One of the event which is tracked by default trace is “ErrorLog”. This means that whatever message is written to errorlog gets written to default trace file as well. Interestingly, whenever there is a backup failure, an error message is written to ERRORLOG and hence default trace. This section takes advantage of that and shows the information. We can read below message under this section, which confirms above logic. No backup operations errors occurred for (AdventureWorks2014) database in the recent past or default trace is not enabled. Successful Restore Operations This section may not be very useful in production server (do you perform a restore of database?) but might be useful in the development and log shipping secondary environment, where we might be interested to see restore operations for a particular database. Here is the expanded version of the section. To fill this section of the report, I have restored the same backups which were taken to populate earlier sections. Here is the simplified version of the query used to populate this output. USE msdb; SELECT * FROM restorehistory t1 LEFT OUTER JOIN restorefile t2 ON ( t1.restore_history_id = t2.restore_history_id) LEFT OUTER JOIN backupset t3 ON ( t1.backup_set_id = t3.backup_set_id) WHERE t1.destination_database_name = N'AdventureWorks2014' ORDER BY restore_date DESC,  t1.restore_history_id,t2.destination_phys_name Have you ever looked at the backup strategy of your key databases? Are they in sync and do we have scope for improvements? Then this is the report to analyze after a week or month of maintenance plans running in your database. Do chime in with what are the strategies you are using in your environments. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports

    Read the article

  • Sun Fire X4800 M2 Delivers World Record TPC-C for x86 Systems

    - by Brian
    Oracle's Sun Fire X4800 M2 server equipped with eight 2.4 GHz Intel Xeon Processor E7-8870 chips obtained a result of 5,055,888 tpmC on the TPC-C benchmark. This result is a world record for x86 servers. Oracle demonstrated this world record database performance running Oracle Database 11g Release 2 Enterprise Edition with Partitioning. The Sun Fire X4800 M2 server delivered a new x86 TPC-C world record of 5,055,888 tpmC with a price performance of $0.89/tpmC using Oracle Database 11g Release 2. This configuration is available 06/26/12. The Sun Fire X4800 M2 server delivers 3.0x times better performance than the next 8-processor result, an IBM System p 570 equipped with POWER6 processors. The Sun Fire X4800 M2 server has 3.1x times better price/performance than the 8-processor 4.7GHz POWER6 IBM System p 570. The Sun Fire X4800 M2 server has 1.6x times better performance than the 4-processor IBM x3850 X5 system equipped with Intel Xeon processors. This is the first TPC-C result on any system using eight Intel Xeon Processor E7-8800 Series chips. The Sun Fire X4800 M2 server is the first x86 system to get over 5 million tpmC. The Oracle solution utilized Oracle Linux operating system and Oracle Database 11g Enterprise Edition Release 2 with Partitioning to produce the x86 world record TPC-C benchmark performance. Performance Landscape Select TPC-C results (sorted by tpmC, bigger is better) System p/c/t tpmC Price/tpmC Avail Database MemorySize Sun Fire X4800 M2 8/80/160 5,055,888 0.89 USD 6/26/2012 Oracle 11g R2 4 TB IBM x3850 X5 4/40/80 3,014,684 0.59 USD 7/11/2011 DB2 ESE 9.7 3 TB IBM x3850 X5 4/32/64 2,308,099 0.60 USD 5/20/2011 DB2 ESE 9.7 1.5 TB IBM System p 570 8/16/32 1,616,162 3.54 USD 11/21/2007 DB2 9.0 2 TB p/c/t - processors, cores, threads Avail - availability date Oracle and IBM TPC-C Response times System tpmC Response Time (sec) New Order 90th% Response Time (sec) New Order Average Sun Fire X4800 M2 5,055,888 0.210 0.166 IBM x3850 X5 3,014,684 0.500 0.272 Ratios - Oracle Better 1.6x 1.4x 1.3x Oracle uses average new order response time for comparison between Oracle and IBM. Graphs of Oracle's and IBM's response times for New-Order can be found in the full disclosure reports on TPC's website TPC-C Official Result Page. Configuration Summary and Results Hardware Configuration: Server Sun Fire X4800 M2 server 8 x 2.4 GHz Intel Xeon Processor E7-8870 4 TB memory 8 x 300 GB 10K RPM SAS internal disks 8 x Dual port 8 Gbs FC HBA Data Storage 10 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with 1 x 3.06 GHz Intel Xeon X5675 processor 8 GB memory 10 x 2 TB 7.2K RPM 3.5" SAS disks 2 x Sun Storage F5100 Flash Array storage (1.92 TB each) 1 x Brocade 5300 switches Redo Storage 2 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with 1 x 3.06 GHz Intel Xeon X5675 processor 8 GB memory 11 x 2 TB 7.2K RPM 3.5" SAS disks Clients 8 x Sun Fire X4170 M2 servers, each with 2 x 3.06 GHz Intel Xeon X5675 processors 48 GB memory 2 x 300 GB 10K RPM SAS disks Software Configuration: Oracle Linux (Sun Fire 4800 M2) Oracle Solaris 11 Express (COMSTAR for Sun Fire X4270 M2) Oracle Solaris 10 9/10 (Sun Fire X4170 M2) Oracle Database 11g Release 2 Enterprise Edition with Partitioning Oracle iPlanet Web Server 7.0 U5 Tuxedo CFS-R Tier 1 Results: System: Sun Fire X4800 M2 tpmC: 5,055,888 Price/tpmC: 0.89 USD Available: 6/26/2012 Database: Oracle Database 11g Cluster: no New Order Average Response: 0.166 seconds Benchmark Description TPC-C is an OLTP system benchmark. 
It simulates a complete environment where a population of terminal operators executes transactions against a database. The benchmark is centered around the principal activities (transactions) of an order-entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses. Key Points and Best Practices Oracle Database 11g Release 2 Enterprise Edition with Partitioning scales easily to this high level of performance. COMSTAR (Common Multiprotocol SCSI Target) is the software framework that enables an Oracle Solaris host to serve as a SCSI Target platform. COMSTAR uses a modular approach to break the huge task of handling all the different pieces in a SCSI target subsystem into independent functional modules which are glued together by the SCSI Target Mode Framework (STMF). The modules implementing functionality at SCSI level (disk, tape, medium changer etc.) are not required to know about the underlying transport. And the modules implementing the transport protocol (FC, iSCSI, etc.) are not aware of the SCSI-level functionality of the packets they are transporting. The framework hides the details of allocation providing execution context and cleanup of SCSI commands and associated resources and simplifies the task of writing the SCSI or transport modules. Oracle iPlanet Web Server middleware is used for the client tier of the benchmark. Each web server instance supports more than a quarter-million users while satisfying the response time requirement from the TPC-C benchmark. See Also Oracle Press Release -- Sun Fire X4800 M2 TPC-C Executive Summary tpc.org Complete Sun Fire X4800 M2 TPC-C Full Disclosure Report tpc.org Transaction Processing Performance Council (TPC) Home Page Ideas International Benchmark Page Sun Fire X4800 M2 Server oracle.com OTN Oracle Linux oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Sun Storage F5100 Flash Array oracle.com OTN Disclosure Statement TPC Benchmark C, tpmC, and TPC-C are trademarks of the Transaction Processing Performance Council (TPC). Sun Fire X4800 M2 (8/80/160) with Oracle Database 11g Release 2 Enterprise Edition with Partitioning, 5,055,888 tpmC, $0.89 USD/tpmC, available 6/26/2012. IBM x3850 X5 (4/40/80) with DB2 ESE 9.7, 3,014,684 tpmC, $0.59 USD/tpmC, available 7/11/2011. IBM x3850 X5 (4/32/64) with DB2 ESE 9.7, 2,308,099 tpmC, $0.60 USD/tpmC, available 5/20/2011. IBM System p 570 (8/16/32) with DB2 9.0, 1,616,162 tpmC, $3.54 USD/tpmC, available 11/21/2007. Source: http://www.tpc.org/tpcc, results as of 7/15/2011.

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems, and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering a 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand if this new processor stands up to its promises. Several tests were performed, mainly focused on: Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor Overall throughput of the SPARC T4-1 server using multiple threads The tests consisted in reading large amounts of data --ten's of gigabytes--, processing and writing them back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided in two ZFS pools for read and write operations. Multi-thread: Testing throughput on the Oracle T4-1 The tests were performed with different number of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two stripped internal disks and one single internal disk. All storage devices used ZFS as filesystem and volume management. Each thread read a dedicated 1GB-large file containing 12.5M lines with the following structure: customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT 1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008 2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008 3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008 4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007 […] The following graphs present the results of our tests: Unsurprisingly up to 16 threads, all files fit in the ZFS cache a.k.a L2ARC : once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards however, it is clear that IO becomes a bottleneck, having a good IO subsystem is thus key. Single-disk performance collapses whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline. For the database load tests, only the best IO configuration --using external storage devices-- were used, hosting the Oracle table spaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second insertion in an Oracle table using 48 parallel threads. Single-thread: Testing the single thread performance Seven different tests were performed on both servers. 
Given the fact that only one thread, thus one file was read, no IO bottleneck was involved, all data being served from the ZFS cache. Read File ? Filter ? Write File: Read file, filter data, write the filtered data in a new file. The filter is set on the “Status” column: only lines with status set to “A” are selected. This limits each output file to about 500 MB. Read File ? Load Database Table: Read file, insert into a single Oracle table. Average: Read file, compute the average of a numeric column, write the result in a new file. Division & Square Root: Read file, perform a division and square root on a numeric column, write the result data in a new file. Oracle DB Dump: Dump the content of an Oracle table (12.5M rows) into a CSV file. Transform: Read file, transform, write the result data in a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns. Sort: Read file, sort a numeric and alpha numeric column, write the result data in a new file. The following table and graph present the final results of the tests: Throughput unit is thousand lines per second processed (K lines/second). Improvement is the % of improvement between the T5140 and T4-1. Test T4-1 (Time s.) T5140 (Time s.) Improvement T4-1 (Throughput) T5140 (Throughput) Read/Filter/Write 125 806 645% 100 16 Read/Load Database 195 1111 570% 64 11 Average 96 557 580% 130 22 Division & Square Root 161 1054 655% 78 12 Oracle DB Dump 164 945 576% 76 13 Transform 159 1124 707% 79 11 Sort 251 1336 532% 50 9 The improvement of single-thread performance is quite dramatic: depending on the tests, the T4 is between 5.4 to 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way filling the gap in single-thread performance, without sacrifying the multi-threaded capability as it still shows a very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application! "As describe in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per seconds without using any bulk Oracle capabilities !" Cedric Carbone, Talend CTO.

    Read the article

  • Mounting ddrescue image after recovery (in over my head)

    - by BorgDomination
    I'm having problems mounting the recovery image. I've tried to mount the image multiple ways. quark@DS9 ~ $ sudo mount -t ext4 /media/jump1/1recover/sdb1.img /mnt mount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so quark@DS9 ~ $ sudo mount -r -o loop /media/jump1/1recover/sdb1.img recover mount: you must specify the filesystem type quark@DS9 ~ $ sudo mount /media/jump1/1recover/sdb1.img mnt mount: you must specify the filesystem type It doesn't even give me detailed information on the file I just made, nautilus says it's 160gb. quark@DS9 ~ $ file /media/jump1/1recover/sdb1.img /media/jump1/1recover/sdb1.img: data quark@DS9 ~ $ mmls /media/jump1/1recover/sdb1.img Cannot determine partition type I'm not sure what I'm doing wrong or if I started this process incorrectly from the beginning. I've outlined what I've done so far below. I'm clueless, I'd appreciate if someone had some input for me. What I have done from the beginning My laptop has two hard drives. One has the dual boot Win7 / Linux Mint system files. Secondary one contained my /home folder. The laptop was jarred and the /home disk was broken. I tried a LiveCD recovery, it failed. Wouldn't even load a Live session with the disk installed. So I turned to ddrescue. quark@DS9 ~ $ sudo fdisk -l Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0009fc18 Device Boot Start End Blocks Id System /dev/sda1 * 2048 112642047 56320000 7 HPFS/NTFS/exFAT /dev/sda2 138033152 312580095 87273472 83 Linux /dev/sda3 112644094 138033151 12694529 5 Extended /dev/sda5 112644096 132173823 9764864 83 Linux /dev/sda6 132175872 138033151 2928640 82 Linux swap / Solaris Partition table entries are not in disk order Disk /dev/sdb: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0002a8ea Device Boot Start End Blocks Id System /dev/sdb1 * 63 312576704 156288321 83 Linux Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xed6d054b Device Boot Start End Blocks Id System /dev/sdc1 63 1953520064 976760001 7 HPFS/NTFS/exFAT sda - 160g internal, holds all system files and all computer functions. sdb - 160g internal, BROKEN, contains about 140g of data I'd like to recover. sdc - 1T external, contains recovery image. Only place that has space to do all this. From this site, https://apps.education.ucsb.edu/wiki/Ddrescue I used this script to create an image of the broken hard drive. I changed the destination to the external USB drive. 
#!/bin/sh prt=sdb1 src=/dev/$prt dst=/media/jump1/1recover/$prt.img log=$dst.log sudo time ddrescue --no-split $src $dst $log sudo time ddrescue --direct --max-retries=3 $src $dst $log sudo time ddrescue --direct --retrim --max-retries=3 $src $dst $log Everything looked like it came off without a hitch: quark@DS9 ~ $ sudo bash recover1 Press Ctrl-C to interrupt Initial status (read from logfile) rescued: 0 B, errsize: 0 B, errors: 0 Current status rescued: 160039 MB, errsize: 4096 B, current rate: 35588 B/s ipos: 3584 B, errors: 1, average rate: 22859 kB/s opos: 3584 B, time from last successful read: 0 s Finished 12.78user 1060.42system 1:56:41elapsed 15%CPU (0avgtext+0avgdata 4944maxresident)k 312580958inputs+0outputs (1major+601minor)pagefaults 0swaps Press Ctrl-C to interrupt Initial status (read from logfile) rescued: 160039 MB, errsize: 4096 B, errors: 1 Current status rescued: 160039 MB, errsize: 1024 B, current rate: 0 B/s ipos: 1536 B, errors: 1, average rate: 13 B/s opos: 1536 B, time from last successful read: 1.3 m Finished 0.00user 0.00system 3:43.95elapsed 0%CPU (0avgtext+0avgdata 4944maxresident)k 238inputs+0outputs (3major+374minor)pagefaults 0swaps Press Ctrl-C to interrupt Initial status (read from logfile) rescued: 160039 MB, errsize: 1024 B, errors: 1 Current status rescued: 160039 MB, errsize: 1024 B, current rate: 0 B/s ipos: 1536 B, errors: 1, average rate: 0 B/s opos: 1536 B, time from last successful read: 3.7 m Finished 0.00user 0.00system 3:43.56elapsed 0%CPU (0avgtext+0avgdata 4944maxresident)k 8inputs+0outputs (0major+376minor)pagefaults 0swaps It looks like, from where I'm standing it worked perfectly. Here's the log: # Rescue Logfile. Created by GNU ddrescue version 1.14 # Command line: ddrescue --direct --retrim --max-retries=3 /dev/sdb1 /media/jump1/1recover/sdb1.img /media/jump1/1recover/sdb1.img.log # current_pos current_status 0x00000600 + # pos size status 0x00000000 0x00000400 + 0x00000400 0x00000400 - 0x00000800 0x254314FC00 + I'm not sure how to proceed. Does this mean all of my data is lost???????? Appreciate ANY input!

    Read the article

  • [.NET Performance Counter] "System.ComponentModel.Win32Exception: Access is denied" is thrown when c

    - by Ricky
    Hi guys: Calling PerformanceCounterCategory.Create() below on my machine throws this exception: System.ComponentModel.Win32Exception: Access is denied. Do you know what the issue might be? Thank you!
        if (!PerformanceCounterCategory.Exists("MyCategory"))
        {
            CounterCreationDataCollection counters = new CounterCreationDataCollection();

            CounterCreationData avgDurationBase = new CounterCreationData();
            avgDurationBase.CounterName = "average time per operation base";
            avgDurationBase.CounterHelp = "Average duration per operation execution base";
            avgDurationBase.CounterType = PerformanceCounterType.AverageBase;
            counters.Add(avgDurationBase);

            // create new category with the counters above
            PerformanceCounterCategory.Create("MyCategory", "Sample category for Codeproject",
                PerformanceCounterCategoryType.SingleInstance, counters);
        }

    Read the article

  • Help analyzing traceroute

    - by Abdulla
    Hello, my name is Abdulla and I'm from Kuwait. Sorry for my question as I know its not technically challenging. I'm facing some problems with my internet connection. My company has a DSL 2mb connection. My main problem is latency, in the morning its good but after that its gets really bad. My Internet provider says there's nothing wrong and that everything is working perfectly. I tried to explain to them the latency issue but they say that as long as I'm getting the download speed there isn't anything I can do about it. I only want to know if this is true and that the company can't do anything before I change my internet provider, as I feel that the guys at the contact center might getting back to me without asking tech support. Below are 2 traces I made, one in the morning and the other in the afternoon: This was taken around 17:00 Microsoft Windows XP [Version 5.1.2600] (C) Copyright 1985-2001 Microsoft Corp. C:\Documents and Settings\Administrator>ping google.com Pinging google.com [66.102.9.104] with 32 bytes of data: Reply from 66.102.9.104: bytes=32 time=387ms TTL=49 Reply from 66.102.9.104: bytes=32 time=388ms TTL=49 Reply from 66.102.9.104: bytes=32 time=375ms TTL=49 Reply from 66.102.9.104: bytes=32 time=375ms TTL=49 Ping statistics for 66.102.9.104: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 375ms, Maximum = 388ms, Average = 381ms C:\Documents and Settings\Administrator>ping google.com /t Pinging google.com [66.102.9.104] with 32 bytes of data: Reply from 66.102.9.104: bytes=32 time=376ms TTL=49 Reply from 66.102.9.104: bytes=32 time=382ms TTL=49 Reply from 66.102.9.104: bytes=32 time=371ms TTL=49 Reply from 66.102.9.104: bytes=32 time=378ms TTL=49 Reply from 66.102.9.104: bytes=32 time=374ms TTL=49 Reply from 66.102.9.104: bytes=32 time=371ms TTL=49 Reply from 66.102.9.104: bytes=32 time=365ms TTL=49 Reply from 66.102.9.104: bytes=32 time=366ms TTL=49 Reply from 66.102.9.104: bytes=32 time=353ms TTL=49 Reply from 66.102.9.104: bytes=32 time=331ms TTL=49 Reply from 66.102.9.104: bytes=32 time=333ms TTL=49 Reply from 66.102.9.104: bytes=32 time=348ms TTL=49 Reply from 66.102.9.104: bytes=32 time=365ms TTL=49 Reply from 66.102.9.104: bytes=32 time=346ms TTL=49 Reply from 66.102.9.104: bytes=32 time=335ms TTL=49 Reply from 66.102.9.104: bytes=32 time=340ms TTL=49 Reply from 66.102.9.104: bytes=32 time=344ms TTL=49 Reply from 66.102.9.104: bytes=32 time=333ms TTL=49 Reply from 66.102.9.104: bytes=32 time=328ms TTL=49 Reply from 66.102.9.104: bytes=32 time=332ms TTL=49 Reply from 66.102.9.104: bytes=32 time=326ms TTL=49 Reply from 66.102.9.104: bytes=32 time=333ms TTL=49 Reply from 66.102.9.104: bytes=32 time=325ms TTL=49 Reply from 66.102.9.104: bytes=32 time=333ms TTL=49 Reply from 66.102.9.104: bytes=32 time=338ms TTL=49 Reply from 66.102.9.104: bytes=32 time=341ms TTL=49 Ping statistics for 66.102.9.104: Packets: Sent = 26, Received = 26, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 325ms, Maximum = 382ms, Average = 348ms Control-C ^C C:\Documents and Settings\Administrator>travert google.com 'travert' is not recognized as an internal or external command, operable program or batch file. 
C:\Documents and Settings\Administrator>tracert google.com Tracing route to google.com [66.102.9.104] over a maximum of 30 hops: 1 <1 ms <1 ms <1 ms 192.168.0.1 2 6 ms 6 ms 6 ms 80-184-31-1.adsl.kems.net [80.184.31.1] 3 7 ms 7 ms 8 ms 168.187.0.226 4 7 ms 8 ms 9 ms 168.187.0.125 5 180 ms 187 ms 188 ms if-11-2.core1.RSD-Riyad.as6453.net [116.0.78.89] 6 209 ms 222 ms 204 ms 195.219.167.57 7 541 ms 536 ms 540 ms 195.219.167.42 8 553 ms 552 ms 538 ms Vlan1102.icore1.PVU-Paris.as6453.net [195.219.24 1.109] 9 547 ms 543 ms 542 ms xe-9-1-0.edge4.paris1.level3.net [4.68.110.213] 10 540 ms 523 ms 531 ms ae-33-51.ebr1.Paris1.Level3.net [4.69.139.193] 11 755 ms 761 ms 695 ms ae-45-45.ebr1.London1.Level3.net [4.69.143.101] 12 271 ms 263 ms 400 ms ae-11-51.car1.London1.Level3.net [4.69.139.66] 13 701 ms 730 ms 742 ms 195.50.118.210 14 659 ms 641 ms 660 ms 209.85.255.76 15 280 ms 283 ms 292 ms 209.85.251.190 16 308 ms 293 ms 296 ms 72.14.232.239 17 679 ms 700 ms 721 ms 64.233.174.18 18 268 ms 281 ms 269 ms lm-in-f104.1e100.net [66.102.9.104] Trace complete. C:\Documents and Settings\Administrator> This was taken at 10:00am Microsoft Windows XP [Version 5.1.2600] (C) Copyright 1985-2001 Microsoft Corp. C:\Documents and Settings\Administrator>ping google.com Pinging google.com [66.102.9.106] with 32 bytes of data: Reply from 66.102.9.106: bytes=32 time=110ms TTL=49 Reply from 66.102.9.106: bytes=32 time=111ms TTL=49 Reply from 66.102.9.106: bytes=32 time=112ms TTL=49 Reply from 66.102.9.106: bytes=32 time=120ms TTL=49 Ping statistics for 66.102.9.106: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 110ms, Maximum = 120ms, Average = 113ms C:\Documents and Settings\Administrator>ping google.com /t Pinging google.com [66.102.9.106] with 32 bytes of data: Reply from 66.102.9.106: bytes=32 time=109ms TTL=49 Reply from 66.102.9.106: bytes=32 time=110ms TTL=49 Reply from 66.102.9.106: bytes=32 time=111ms TTL=49 Reply from 66.102.9.106: bytes=32 time=111ms TTL=49 Reply from 66.102.9.106: bytes=32 time=112ms TTL=49 Reply from 66.102.9.106: bytes=32 time=112ms TTL=49 Reply from 66.102.9.106: bytes=32 time=116ms TTL=49 Reply from 66.102.9.106: bytes=32 time=110ms TTL=49 Reply from 66.102.9.106: bytes=32 time=109ms TTL=49 Reply from 66.102.9.106: bytes=32 time=110ms TTL=49 Reply from 66.102.9.106: bytes=32 time=109ms TTL=49 Reply from 66.102.9.106: bytes=32 time=110ms TTL=49 Reply from 66.102.9.106: bytes=32 time=112ms TTL=49 Reply from 66.102.9.106: bytes=32 time=109ms TTL=49 Reply from 66.102.9.106: bytes=32 time=110ms TTL=49 Reply from 66.102.9.106: bytes=32 time=115ms TTL=49 Reply from 66.102.9.106: bytes=32 time=110ms TTL=49 Reply from 66.102.9.106: bytes=32 time=109ms TTL=49 Reply from 66.102.9.106: bytes=32 time=110ms TTL=49 Reply from 66.102.9.106: bytes=32 time=113ms TTL=49 Reply from 66.102.9.106: bytes=32 time=115ms TTL=49 Reply from 66.102.9.106: bytes=32 time=109ms TTL=49 Reply from 66.102.9.106: bytes=32 time=110ms TTL=49 Ping statistics for 66.102.9.106: Packets: Sent = 32, Received = 32, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 109ms, Maximum = 135ms, Average = 112ms Control-C ^C C:\Documents and Settings\Administrator>tracert google.com Tracing route to google.com [66.102.9.104] over a maximum of 30 hops: 1 <1 ms <1 ms <1 ms 192.168.0.1 2 6 ms 6 ms 6 ms 80-184-31-1.adsl.kems.net [80.184.31.1] 3 8 ms 7 ms 6 ms 168.187.0.226 4 6 ms 7 ms 7 ms 168.187.0.125 5 20 ms 20 ms 18 ms 
if-11-2.core1.RSD-Riyad.as6453.net [116.0.78.89] 6 171 ms 205 ms 215 ms 195.219.167.57 7 191 ms 215 ms 226 ms 195.219.167.42 8 * 103 ms 94 ms Vlan1102.icore1.PVU-Paris.as6453.net [195.219.24 1.109] 9 94 ms 95 ms 97 ms xe-9-1-0.edge4.paris1.level3.net [4.68.110.213] 10 94 ms 94 ms 94 ms ae-33-51.ebr1.Paris1.Level3.net [4.69.139.193] 11 101 ms 101 ms 101 ms ae-48-48.ebr1.London1.Level3.net [4.69.143.113] 12 102 ms 102 ms 101 ms ae-11-51.car1.London1.Level3.net [4.69.139.66] 13 103 ms 102 ms 103 ms 195.50.118.210 14 137 ms 103 ms 100 ms 209.85.255.76 15 130 ms 124 ms 124 ms 209.85.251.190 16 114 ms 116 ms 116 ms 72.14.232.239 17 135 ms 113 ms 126 ms 64.233.174.18 18 126 ms 125 ms 127 ms lm-in-f104.1e100.net [66.102.9.104] Trace complete. C:\Documents and Settings\Administrator>

    Read the article

  • Translation of clustering problem to graph theory language

    - by honk
    I have a rectangular planar grid, with each cell assigned an integer weight. I am looking for an algorithm to identify clusters of 3 to 6 adjacent cells with higher-than-average weight. These blobs should have an approximately circular shape. In my case the average weight of cells not containing a cluster is around 6, and for cells containing a cluster it is around 6+4, i.e. there is a "background weight" of roughly 6. The weights fluctuate according to Poisson statistics. For a small background, greedy or seeded algorithms perform pretty well, but this breaks down when my cluster cells have weights close to the fluctuations in the background. Also, I cannot do a brute-force search over all possible configurations because my grid is large (something like 1000x1000). I have the impression there might be ways to tackle this in graph theory. I have heard of vertex covers and cliques, but am not sure how best to translate my problem into that language.
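
    For reference, a rough sketch of the seeded greedy baseline mentioned above, the one that degrades once the cluster excess is comparable to the background fluctuations. Python; the grid size, seed choice and injected cluster are illustrative assumptions.

        import numpy as np

        def grow_blob(grid, seed, max_cells=6):
            # greedily grow from the seed, always absorbing the heaviest 4-neighbour
            blob = {seed}
            while len(blob) < max_cells:
                frontier = set()
                for (r, c) in blob:
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]:
                            frontier.add((nr, nc))
                frontier -= blob
                blob.add(max(frontier, key=lambda rc: grid[rc]))
            return blob

        # usage sketch: Poisson background around 6 with one injected +4 cluster
        rng = np.random.default_rng(0)
        grid = rng.poisson(6, size=(50, 50)).astype(float)
        grid[20:22, 30:32] += 4
        seed = tuple(np.unravel_index(np.argmax(grid), grid.shape))
        print(sorted(grow_blob(grid, seed)))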

    Read the article

  • sequential search homework question

    - by Phenom
    Consider a disk file containing 100 records. (a) How many comparisons would be required on average to find a record using sequential search, if the record is known to be in the file? I figured out that this is 100/2 = 50. (b) If the record has a 68% probability of being in the file, how many comparisons are required on average? This is the part I'm having trouble with. At first I thought it was 68% * 50, but then realized that was wrong after thinking about it. Then I thought it was (100% - 68%) * 50, but I still feel that that is wrong. Any hints?
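
    One way to set up part (b), offered as a hint rather than a definitive answer: treat it as an expected value over two cases, and decide what the unsuccessful case costs (the usual textbook convention is that an unsuccessful sequential search scans all 100 records).

        n = 100
        p = 0.68
        found_cost = n / 2        # the 50 from part (a); (n + 1) / 2 = 50.5 is the more precise value
        miss_cost = n             # an unsuccessful search must look at every record
        expected = p * found_cost + (1 - p) * miss_cost
        print(expected)           # 66.0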

    Read the article

  • Conflict between AVAudioRecorder and AVAudioPlayer

    - by John
    Hi, here is my problem : The code (FooController) : NSString *path = [[NSBundle mainBundle] pathForResource:@"mySound" ofType:@"m4v"]; soundEffect = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path] error:NULL]; [soundEffect play]; // MicBlow micBlow = [[MicBlowController alloc]init]; And MicBlowController contains : NSURL *url = [NSURL fileURLWithPath:@"/dev/null"]; NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithFloat: 44100.0], AVSampleRateKey, [NSNumber numberWithInt: kAudioFormatAppleLossless], AVFormatIDKey, [NSNumber numberWithInt: 1], AVNumberOfChannelsKey, [NSNumber numberWithInt: AVAudioQualityMax], AVEncoderAudioQualityKey, nil]; and [recorder updateMeters]; const double ALPHA = 0.05; double peakPowerForChannel = pow(10,(0.05*[recorder peakPowerForChannel:0])); lowPassResults = ALPHA * peakPowerForChannel + (1.0 - ALPHA) * lowPassResults; NSLog(@"Average input: %f Peak input %f Low pass results: %f",[recorder averagePowerForChannel:0],[recorder peakPowerForChannel:0],lowPassResults); If I play the background sound and try to get the peak from the mic I get this log : Average input: 0.000000 Peak input -120.000000 Low pass results: 0.000001 But if I comment all parts about AVAudioPlayer it works. I think there is a problem of channel. Thanks

    Read the article

  • Monitoring disk performance with MRTG

    - by Ghostrider
    I use MRTG to monitor vital stats on my servers, like disk space, CPU load, memory usage, temperatures, etc. It all works fine for parameters that don't change rapidly. By running a small VB script I can also read any performance counter. However, these scripts are called by MRTG every 5 minutes, while performance counters like physical disk idle time return a snapshot value from the previous few seconds, so a lot of data is missed. Surely I could write a service that polls all required counters in the background and stores average values somewhere on disk where MRTG would pick them up. But before I do so, I would like to find out whether there is a ready-made solution that would give me the average value of a counter over the last 5 minutes, as opposed to an immediate snapshot.
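
    If nothing ready-made turns up, the background-averaging service described above is only a few lines. A minimal sketch in Python; the counter read, file name and intervals are assumptions (the sample function is a placeholder for whatever the existing VB script queries).

        import time

        def sample_counter():
            # placeholder: swap in a real read of the performance counter,
            # e.g. the same query the existing VB script performs
            return 0.0

        def poll_average(sample_fn=sample_counter, interval=10, window=300,
                         out_file="counter.avg"):
            # keep roughly `window` seconds of samples and expose their mean,
            # so MRTG's 5-minute poll sees an average instead of a snapshot
            samples = []
            while True:
                samples.append(sample_fn())
                samples = samples[-(window // interval):]
                with open(out_file, "w") as f:
                    f.write("%f\n" % (sum(samples) / len(samples)))
                time.sleep(interval)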

    Read the article

  • python variable scope

    - by Oscar Reyes
    I'm teaching myself Python, and I was translating some sample code into this class definition, which is pretty much what I intended:
        class Student:
            def __init__(self, name, a, b, c):
                self.name = name
                self.a = a
                self.b = b
                self.c = c

            def average(self):
                return (a + b + c) / 3.0
    Later, in the main section, I create an instance and call it a:
        if __name__ == "__main__":
            a = Student("Oscar", 10, 10, 10)
    That's how I found out that the variable a declared in main is available to the method average, and that to make the method work I have to write self.a + self.b + self.c instead. What's the rationale for this? I found related questions, but I don't really know if they are about the same thing.
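
    A minimal sketch of the lookup rule at work (names are illustrative): a bare name inside a method is resolved through the local and then the global scope, so it never sees the instance's attributes; those always have to be reached through self.

        x = 10                      # module-level (global) name

        class Demo:
            def __init__(self):
                self.x = 99         # instance attribute: a separate thing entirely

            def bare(self):
                return x            # bare name: local scope, then global -> 10

            def attr(self):
                return self.x       # attribute lookup on the instance -> 99

        d = Demo()
        print(d.bare(), d.attr())   # prints: 10 99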

    Read the article

  • Getting Greyscale pixel value from RGB colourspace in Java using BufferedImage

    - by Andrew Bolster
    Anyone know of a simple way of converting the RGB int value returned from BufferedImage's getRGB(i,j) into a greyscale value? I was going to simply average the RGB values by breaking them up using this:

        int alpha = (pixel >> 24) & 0xff;
        int red   = (pixel >> 16) & 0xff;
        int green = (pixel >> 8) & 0xff;
        int blue  = (pixel) & 0xff;

    and then averaging red, green and blue. But I feel like for such a simple operation I must be missing something...
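    If it helps, the two usual options sketched in Python (the bit masking mirrors the snippet above; the 0.299/0.587/0.114 weights are the standard ITU-R BT.601 luminance coefficients, which track perceived brightness better than a plain average):

        def grey_average(pixel):
            # plain average of the three channels
            red   = (pixel >> 16) & 0xff
            green = (pixel >> 8) & 0xff
            blue  = pixel & 0xff
            return (red + green + blue) // 3

        def grey_luminance(pixel):
            # weighted luminance (ITU-R BT.601); usually looks more natural
            red   = (pixel >> 16) & 0xff
            green = (pixel >> 8) & 0xff
            blue  = pixel & 0xff
            return int(0.299 * red + 0.587 * green + 0.114 * blue)

        print(grey_average(0xFF8040A0), grey_luminance(0xFF8040A0))   # example ARGB pixel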

    Read the article

  • Three coworkers Riddle Problem

    - by John S
    This isn't homework; I've got a solution, but it doesn't protect against cheaters. Three coworkers would like to know their average salary. However, they are self-conscious and don't want to tell each other their own salaries, for fear of either being ridiculed or getting their houses robbed. How can they find their average salary without disclosing their own salaries? Now, a solution that requires the last person to tell the group the sum isn't allowed, because that person could cheat. Solution: http://karavi.wordpress.com/2009/12/18/solutions-to-wu%E2%80%99s-puzzles-and-riddles-ghetto-encryption-2-medium/
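    The linked write-up is essentially additive secret sharing; a hedged sketch of that idea in Python (the share range and the example salaries are made up for illustration): each coworker splits their salary into three shares that sum to it, keeps one and hands one to each colleague, then everyone announces only the total of the shares they hold. The announced totals sum to the combined salary, so the average falls out, and no single announcement reveals anyone's salary.

        import random

        def split_into_shares(salary, n=3):
            # n - 1 random shares plus a balancing share so they sum to the salary
            shares = [random.randint(-10**6, 10**6) for _ in range(n - 1)]
            shares.append(salary - sum(shares))
            return shares

        salaries = [50_000, 72_000, 91_000]                 # demo values only
        all_shares = [split_into_shares(s) for s in salaries]

        # coworker i ends up holding one share from each person (column i)
        announced = [sum(all_shares[p][i] for p in range(3)) for i in range(3)]
        print(sum(announced) / 3)                           # equals sum(salaries) / 3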

    Read the article

  • Filtering MySQL query result according to a interval of timestamp

    - by celalo
    Let's say I have a very large MySQL table with a timestamp field. I want to filter out some of the results so that I don't have too many rows, because I am going to print them. Let's say the timestamps increase as the number of rows increases, roughly one per minute on average (they don't have to be exactly one minute apart, e.g.: 2010-06-07 03:55:14, 2010-06-07 03:56:23, 2010-06-07 03:57:01, 2010-06-07 03:57:51, 2010-06-07 03:59:21 ...). As I mentioned, I want to filter out some of the records. I don't have a specific rule for doing that, but I was thinking of filtering the rows according to the timestamp interval. After filtering, I want a result set that has a certain number of minutes between timestamps on average (e.g.: 2010-06-07 03:20:14, 2010-06-07 03:29:23, 2010-06-07 03:38:01, 2010-06-07 03:49:51, 2010-06-07 03:59:21 ...). Last but not least, the operation should not take an incredible amount of time; I need it to be almost as fast as a normal SELECT. Do you have any suggestions?
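    One approach that keeps it a single query, sketched below with assumed names (my_table and its id/ts columns are placeholders): group the rows into fixed-size time buckets and return one representative row per bucket, so the spacing of the result is roughly constant regardless of how dense the data is.

        # Hedged sketch: roughly one row per 10-minute bucket; table/column names are assumptions.
        BUCKET_SECONDS = 600

        query = f"""
            SELECT MIN(id) AS id, MIN(ts) AS ts
            FROM my_table
            WHERE ts BETWEEN %s AND %s
            GROUP BY FLOOR(UNIX_TIMESTAMP(ts) / {BUCKET_SECONDS})
            ORDER BY ts
        """
        # e.g. cursor.execute(query, (start, end)) with any MySQL DB-API driver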

    Read the article

  • Are there any worse sorting algorithms than Bogosort (a.k.a Monkey Sort)?

    - by womp
    My co-workers took me back in time to my University days with a discussion of sorting algorithms this morning. We reminisced about our favorites like StupidSort, and one of us was sure we had seen a sort algorithm that was O(n!). That got me started looking around for the "worst" sorting algorithms I could find. We postulated that a completely random sort would be pretty bad (i.e. randomize the elements - is it in order? no? randomize again), and I looked around and found out that it's apparently called BogoSort, or Monkey Sort, or sometimes just Random Sort. Monkey Sort appears to have a worst-case performance of O(∞), a best-case performance of O(n), and an average performance of O(n * n!). Are there any named algorithms that have worse average performance than O(n * n!)? Or any that are just sillier than Monkey Sort in general?
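    For reference, Bogosort itself is only a few lines; a throwaway sketch in Python (illustrative only, never use it in anger):

        import random

        def is_sorted(xs):
            return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

        def bogosort(xs):
            # shuffle until the list happens to be in order:
            # O(n) best case, O(n * n!) expected, unbounded worst case
            while not is_sorted(xs):
                random.shuffle(xs)
            return xs

        print(bogosort([3, 1, 2]))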

    Read the article

  • Database Network Latency

    - by Karl
    Hi All, I am currently working on an n-tier system and battling some database performance issues. One area we have been investigating is the latency between the database server and the application server. In our test environment the average ping time between the two boxes is in the region of 0.2 ms, but on the client's site it's more in the region of 8.2 ms. Is that something we should be worried about? For your average system, what do you consider a reasonable latency, and how would you go about testing/measuring it? Karl
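    As a starting point for measuring it, a hedged sketch in Python: time a trivial round trip many times and look at the distribution rather than a single ping (get_connection is a placeholder for whatever driver and connection string the system actually uses).

        # Hedged sketch: measure app-server <-> database round-trip time with a cheap query.
        import statistics
        import time

        def measure_latency(get_connection, samples=200):
            conn = get_connection()          # placeholder: any DB-API connection factory
            cur = conn.cursor()
            timings_ms = []
            for _ in range(samples):
                start = time.perf_counter()
                cur.execute("SELECT 1")      # trivial query: cost is dominated by the network hop
                cur.fetchone()
                timings_ms.append((time.perf_counter() - start) * 1000.0)
            return statistics.mean(timings_ms), max(timings_ms)

        # mean_ms, worst_ms = measure_latency(get_connection)

    As a rough sense of scale: at 8 ms per round trip, a request that issues 100 small queries spends close to a second on the network alone, so the number of round trips usually matters more than the latency of any single one.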

    Read the article

  • Is there any performance difference between Debug and Release?

    - by Lazlo
    I'm using MySql Connector .NET to load an account and transfer it to the client. This operation is rather intensive, considering how many child elements of the account have to be loaded. In Debug mode, it takes at most 1 second to load the account; the average is around 500 ms. In Release mode, it takes 1 to 4 seconds; the average is around 1500 ms. Since there is no #if DEBUG directive or the like in my code, I'm wondering where the difference is coming from. Is there a project build option I could change? Or does it have to do with MySql Connector .NET behaving differently depending on the build mode?

    Read the article

  • SQL select statement with increment

    - by Matt
    Currently I'm using a for statement in PHP to get all the months for this SQL statement, but I would like to know if I can do it all in SQL. Basically I have to get the average listing price and the average selling price for each month going back one year, where the selling date falls in that month. Simple with PHP, but that creates 12 database hits. I'm trying the SQL statement below, but it returns listings totally out of order:

        SELECT avg(ListingPrice), avg(SellingPrice), count(ListingDate),
               DATE(SellingDate) as date, MONTH(SellingDate) as month, YEAR(SellingDate) as year
        FROM `rets_property_resi`
        WHERE Area = '5030' AND Status = 'S' AND SellingDate

    Output:

        867507.142857   877632.492063   63  1996-12-24  12  1996
        971355.833333   981533.333333   60  1997-11-18  11  1997
        949334.328358   985453.731343   67  1997-10-23  10  1997
        794150.000000   806642.857143   70  1996-09-20   9  1996
        968371.621622   988074.702703   74  1997-08-21   8  1997
        1033413.366337  1053018.534653 101  1997-07-30   7  1997
        936115.054795   962787.671233   73  1996-06-07   6  1996
        875378.735632   906921.839080   87  1996-05-16   5  1996
        926635.616438   958561.643836   73  2010-04-13   4  2010
        1030224.472222  1046332.291667  72  2010-03-31   3  2010
        921711.458333   924177.083333   48  1997-02-28   2  1997
        799484.615385   791551.282051   39  1997-01-15   1  1997

    As you can see, it pulls from random dates. I need it to pull from 2010-03, 2010-02, 2010-01, etc... any help is appreciated!
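    If the goal is one averaged row per month for the last year, a single GROUP BY query avoids the 12 separate hits; a hedged sketch follows (the question's WHERE clause was cut off above, so the date condition here is an assumption):

        # Hedged sketch of a single-query version; the date cutoff is an assumption
        # since the original WHERE clause was truncated.
        query = """
            SELECT YEAR(SellingDate)  AS yr,
                   MONTH(SellingDate) AS mon,
                   AVG(ListingPrice)  AS avg_listing,
                   AVG(SellingPrice)  AS avg_selling,
                   COUNT(*)           AS sales
            FROM `rets_property_resi`
            WHERE Area = '5030'
              AND Status = 'S'
              AND SellingDate >= DATE_SUB(CURDATE(), INTERVAL 1 YEAR)
            GROUP BY YEAR(SellingDate), MONTH(SellingDate)
            ORDER BY yr DESC, mon DESC
        """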

    Read the article

  • Math: How to sum each row of a matrix

    - by macek
    I have a 1x8 matrix of students where each student is a 4x1 matrix of scores. Something like:

        SCORES
        S [62, 91, 74, 14]
        T [59, 7 , 59, 21]
        U [44, 9 , 69, 6 ]
        D [4 , 32, 28, 53]
        E [78, 99, 53, 83]
        N [48, 86, 89, 60]
        T [56, 71, 15, 80]
        S [47, 67, 79, 40]

    Main question: Using sigma notation, or some other mathematical function, how can I get a 1x8 matrix where each student's scores are summed?

        # expected result
        TOTAL OF SCORES
        S [241]
        T [146]
        U [128]
        D [117]
        E [313]
        N [283]
        T [222]
        S [233]

    Sub-question: To get the average, I will multiply the matrix by 1/4. Would there be a quicker way to get the final result?

        AVERAGE SCORE
        S [60.25]
        T [36.50]
        U [32.00]
        D [29.25]
        E [78.25]
        N [70.75]
        T [55.50]
        S [58.25]

    Note: I'm not looking for programming-related algorithms here. I want to know if it is possible to represent this with pure mathematical functions alone.
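    For what it's worth, one way to write it in plain summation notation (a sketch, taking S as the matrix of scores with rows indexed by student and columns by score):

        t_i = \sum_{j=1}^{4} s_{ij}, \qquad
        \bar{t}_i = \frac{1}{4}\sum_{j=1}^{4} s_{ij}, \qquad i = 1, \dots, 8

    Equivalently, multiplying by a column vector of ones produces all the totals at once, and the averages follow by scaling:

        \mathbf{t} = S\,\mathbf{1}, \qquad
        \bar{\mathbf{t}} = \tfrac{1}{4}\, S\,\mathbf{1}, \qquad
        \mathbf{1} = (1, 1, 1, 1)^{\mathsf{T}}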

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >