Search Results

Search found 8 results on 1 page for 'theodore enderby'.

Page 1/1

  • Rotations and Origins

    - by Theodore Enderby
    I was hoping someone could explain to me, or help me understand, the math behind rotations and origins. I'm working on a little top-down space sim and I can rotate my ship just how I want it. Now, when I get my blasters going, it'd be nice if they shared the same rotation. Here's a picture, and here's some code: blast.X = ship.X+5; blast.Y = ship.Y; blast.RotationAngle = ship.RotationAngle; blast.Origin = new Vector2(ship.Origin.X, ship.Origin.Y); I add five so the sprite lines up when facing right. I tried adding five to the blast origin, but no go. Any help is much appreciated.
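
    A minimal sketch of the rotation math in question, shown in Python rather than the asker's XNA/C#, with illustrative names (ship_x, ship_angle, offset_x): the blaster's local offset is rotated by the ship's angle before being added to the ship's position, instead of always adding a fixed +5 on X.

        import math

        def blaster_position(ship_x, ship_y, ship_angle, offset_x=5.0, offset_y=0.0):
            """Rotate the blaster's local offset by the ship's angle (in radians),
            then translate it to the ship's position, so the blaster tracks the
            ship's facing instead of always sitting 5 units to the right."""
            rotated_x = offset_x * math.cos(ship_angle) - offset_y * math.sin(ship_angle)
            rotated_y = offset_x * math.sin(ship_angle) + offset_y * math.cos(ship_angle)
            return ship_x + rotated_x, ship_y + rotated_y

        # Example: ship at (100, 100) rotated by 90 degrees; the offset rotates
        # with it, giving roughly (100, 105) instead of (105, 100).
        bx, by = blaster_position(100.0, 100.0, math.radians(90))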

    Read the article

  • Linux: the kernel has a "Lance Armstrong" bug in its Ext4 file system that causes corruption and data loss

    Linux kernel: the "Lance Armstrong" bug in the Ext4 file system causes data loss. Theodore Ts'o, a Linux kernel developer, has just published details of a serious bug in the Linux kernel. The bug was discovered by a user during a kernel upgrade from version 3.6.1 to version 3.6.3, which led to the corruption and loss of his data. The problem was dubbed, with a touch of humor, the "Lance Armstrong bug" by Theodore Ts'o, in reference to the famous cyclist stripped of his titles for doping, because it passed every debugging test, and yet it...

    Read the article

  • Selecting only the entries that have a distinct combination of values?

    - by Theodore E O'Neal
    I have a table, links1, that has the column headers CardID and AbilityID, and looks like this:

    CardID | AbilityID
    1001   | 1
    1001   | 2
    1001   | 3
    1002   | 2
    1002   | 3
    1002   | 4
    1003   | 3
    1003   | 4
    1003   | 5

    What I want is to be able to return all the CardIDs that have two specific AbilityIDs. For example: if I choose 1 and 2, it returns 1001. If I choose 3 and 4, it returns 1002 and 1003. Is it possible to do this with only one table, or will I need to create an identical table and do an INNER JOIN on those?
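
    This kind of query can be answered from the one table with GROUP BY and HAVING, no second table or INNER JOIN needed. A sketch using Python's built-in sqlite3 module and the table and column names from the question (the ability values 1 and 2 are just the question's example):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE links1 (CardID INTEGER, AbilityID INTEGER)")
        conn.executemany(
            "INSERT INTO links1 (CardID, AbilityID) VALUES (?, ?)",
            [(1001, 1), (1001, 2), (1001, 3),
             (1002, 2), (1002, 3), (1002, 4),
             (1003, 3), (1003, 4), (1003, 5)],
        )

        # Keep only cards that have BOTH requested abilities: restrict to the two
        # AbilityIDs, group by card, and require two distinct matches per card.
        query = """
            SELECT CardID
            FROM links1
            WHERE AbilityID IN (?, ?)
            GROUP BY CardID
            HAVING COUNT(DISTINCT AbilityID) = 2
            ORDER BY CardID
        """
        print([row[0] for row in conn.execute(query, (1, 2))])  # [1001]
        print([row[0] for row in conn.execute(query, (3, 4))])  # [1002, 1003]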

    Read the article

  • C++ Tutorial 1: Introduction to programming in C++

    As the world's increasing use of computer technology in the digital era continues apace, it becomes ever more vital for engineers and developers to have a better understanding of the technical process... [Author: Theodore Odeluga - Computers and Internet - April 05, 2010]

    Read the article

  • Data Quality and Master Data Management Resources

    - by Dejan Sarka
    Many companies or organizations do regular data cleansing. When you cleanse the data, the data quality goes up to some higher level, and that level is determined by the amount of work invested in the cleansing. As time passes, the data quality deteriorates, and you need to repeat the cleansing process. If you spend the same amount of effort as you did on the previous cleansing, you can expect the same level of data quality you had after the previous cleansing. Then the data quality deteriorates over time again, and the cleansing process starts over and over again.

    The idea of Data Quality Services is to mitigate the cleansing process: the amount of time you need to spend on cleansing decreases, while you achieve higher and higher levels of data quality. While cleansing, you learn what types of errors to expect, discover error patterns, find domains of correct values, and so on. You don't throw away this knowledge; you store it and use it to find and correct the same issues automatically during your next cleansing process. The following figure shows this graphically.

    The idea of master data management, which you can perform with Master Data Services (MDS), is to prevent data quality from deteriorating. Once you reach a particular quality level, the MDS application, together with the defined policies, people, and master data management processes, allows you to maintain this level permanently. This idea is shown in the following picture.

    OK, now you know what DQS and MDS are about, and you can imagine the importance of maintaining data quality. Here are some resources that help you prepare and execute data quality (DQ) and master data management (MDM) activities.

    Books:
    Dejan Sarka and Davide Mauri: Data Quality and Master Data Management with Microsoft SQL Server 2008 R2 – a general introduction to MDM, MDS, and data profiling; matching explained in depth.
    Dejan Sarka, Matija Lah and Grega Jerkic: MCTS Self-Paced Training Kit (Exam 70-463): Building Data Warehouses with Microsoft SQL Server 2012 – I wrote quite a few chapters about DQ and MDM, and also introduced SQL Server 2012 DQS.
    Thomas Redman: Data Quality: The Field Guide – you should start with this book; Thomas Redman is the father of DQ and MDM.
    Tyler Graham: Microsoft SQL Server 2012 Master Data Services – MDS in depth from a product team member.
    Arkady Maydanchik: Data Quality Assessment – data profiling in depth.
    Tamraparni Dasu, Theodore Johnson: Exploratory Data Mining and Data Cleaning – advanced data profiling with data mining.

    Forthcoming presentations:
    I am presenting a DQS and MDM seminar at PASS SQL Rally Amsterdam 2013, on Wednesday, November 6th, 2013: Enterprise Information Management with SQL Server 2012 – a good kick start to your first DQ and/or MDM project.

    Courses:
    Data Quality and Master Data Management with SQL Server 2012 – I wrote a 2-day course for SolidQ. If you are interested in this course, which I could also deliver as a shorter seminar, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, which are welcome to contact me as well.

    Start improving the quality of your data now!

    Read the article

  • Going by the eBook

    - by Tony Davis
    The book and magazine publishing world is rapidly going digital, and the industry is faced with making drastic changes to its ways of doing business. The sudden take-up of digital readers by the book-buying public has surprised even the most technologically savvy in the industry. Printed books just aren't selling like they did. In contrast, eBooks are doing well.

    The ePub file format is the standard around which all publishers are converging. ePub is a standard for formatting book content so that it can be reflowed for various devices, with their widely differing screen sizes, and can be read offline. If you unzip an ePub file, you'll find familiar formats such as XML, XHTML and CSS. This is both a blessing and a curse. Whilst it is good to be able to use familiar technologies that have been developed to a level of considerable sophistication, it doesn't get us all the way to producing a viable publication. XHTML is a page-description language, not a book-description language, as we soon found out during our initial experiments, when trying to specify headers, footers, indexes and chaptering. As a result, it is difficult to predict how any particular eBook application will decide to render a book. There isn't even a consensus as to how the cover image is specified.

    All of this is awkward for the publisher. Each book must be created and revised in a form from which a whole range of 'printed media' can be generated, from print books, to Mobi for Kindles, ePub for most tablets and smartphones, HTML for excerpted chapters on websites, and a plethora of other formats for other eBook readers, each with its own idiosyncrasies. In theory, if we can get our content into a clean, semantic XML form, such as DocBook, we can, from there, after every revision, perform a series of relatively simple XSLT transformations to output anything from an HTML article, to an ePub file for reading on an iPad, to an ICML file (an XML-based file format supported by the InDesign tool), ready for print publication. As always, however, the task looks bigger the closer you get to the detail.

    On the way to the utopian world of an XML-based book format that encompasses all the diverse requirements of the different publication media, ePub looks like a reasonable format to adopt. Its forthcoming support for HTML 5 and CSS 3, with ePub 3.0, means that features such as widow-and-orphan controls, multi-column flow and multimedia graphics can be incorporated into eBooks. This starts to make it possible to build an "app-like" experience into the eBook and to free publishers to think of putting content before container; to think of what content is required, be it graphical, textual or audio, from the point of view of the user, rather than what's possible in a given, traditional book "container". In the meantime, there is a gap between what publishers require and what current technology can provide and, of course, building this app-like experience is far from plain sailing. Real portability between devices is still a big challenge, and achieving the sort of wizardry seen in the likes of Theodore Grey's "Elements" eBook will require some serious device-specific programming skills.

    Cheers, Tony.
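
    The "one semantic source, many outputs" pipeline described above amounts to running the same XML document through a different XSLT stylesheet per target. A small illustrative sketch using Python's lxml, with a toy DocBook-like chapter and a trivial XHTML stylesheet standing in for a real publisher's toolchain:

        from lxml import etree

        # A toy stand-in for a clean, semantic, DocBook-like source document.
        source = etree.XML("""
        <chapter>
          <title>Going by the eBook</title>
          <para>The publishing world is rapidly going digital.</para>
        </chapter>
        """)

        # A trivial stylesheet mapping the semantic markup to XHTML; a real
        # pipeline would keep one stylesheet per target (XHTML, ePub, ICML).
        to_xhtml = etree.XSLT(etree.XML("""
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/chapter">
            <html><body>
              <h1><xsl:value-of select="title"/></h1>
              <xsl:for-each select="para"><p><xsl:value-of select="."/></p></xsl:for-each>
            </body></html>
          </xsl:template>
        </xsl:stylesheet>
        """))

        # Serialized XHTML, ready to be packaged into an ePub container.
        print(str(to_xhtml(source)))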

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited to pure single-threaded workloads. The new SPARC T4 processor now fills that gap by offering 5x better single-thread performance than previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand whether this new processor lives up to its promises.

    Several tests were performed, mainly focused on:
    - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
    - Overall throughput of the SPARC T4-1 server using multiple threads

    The tests consisted of reading large amounts of data --tens of gigabytes--, processing them and writing them back to a file or an Oracle 11gR2 database table. They are CPU-, memory- and IO-bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.

    Multi-thread: Testing throughput on the Oracle T4-1

    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1 GB file containing 12.5M lines with the following structure:

    customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
    1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
    2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
    3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
    4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
    […]

    The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. L2ARC): once the cache is hot, there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck, so having a good IO subsystem is key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant, with just a slow decline.

    For the database load tests, only the best IO configuration --using external storage devices-- was used, hosting the Oracle table spaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

    Single-thread: Testing single-thread performance

    Seven different tests were performed on both servers.
    Since only one thread, and thus one file, was read, no IO bottleneck was involved; all data was served from the ZFS cache. The seven tests were:

    - Read File -> Filter -> Write File: read a file, filter the data, and write the filtered data to a new file. The filter is set on the "Status" column: only lines with status set to "A" are selected. This limits each output file to about 500 MB.
    - Read File -> Load Database Table: read a file, insert into a single Oracle table.
    - Average: read a file, compute the average of a numeric column, write the result to a new file.
    - Division & Square Root: read a file, perform a division and square root on a numeric column, write the result data to a new file.
    - Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
    - Transform: read a file, transform it, write the result data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    - Sort: read a file, sort a numeric and an alphanumeric column, write the result data to a new file.

    The following table and graph present the final results of the tests. Throughput is in thousands of lines processed per second (K lines/second); Improvement is the percentage improvement of the T4-1 over the T5140.

    Test                    T4-1 (Time s.)   T5140 (Time s.)   Improvement   T4-1 (Throughput)   T5140 (Throughput)
    Read/Filter/Write       125              806               645%          100                 16
    Read/Load Database      195              1111              570%          64                  11
    Average                 96               557               580%          130                 22
    Division & Square Root  161              1054              655%          78                  12
    Oracle DB Dump          164              945               576%          76                  13
    Transform               159              1124              707%          79                  11
    Sort                    251              1336              532%          50                  9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way towards filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs.

    Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!

    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on the T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row-by-row insertion in an Oracle DB is faster, with more than 650,000 rows per second, without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.
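
    The single-thread "Read File -> Filter -> Write File" scenario is simple enough to sketch outside Talend. A rough Python equivalent of that test, assuming the semicolon-separated layout with the header row shown above (the file names are illustrative):

        import csv

        def filter_status_a(in_path="customers.csv", out_path="customers_A.csv"):
            """Copy only the rows whose Cust_Status column is "A", mirroring the
            benchmark's read/filter/write scenario on semicolon-separated data."""
            with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
                reader = csv.reader(src, delimiter=";")
                writer = csv.writer(dst, delimiter=";")
                header = next(reader)
                status_col = header.index("Cust_Status")
                writer.writerow(header)
                for row in reader:
                    if row[status_col] == "A":
                        writer.writerow(row)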

    Read the article

  • Remove/squash entries in a vertical hash

    - by Forkrul Assail
    I have a grid that represents an X, Y matrix, stored as a hash here. Some points on the X Y matrix may have values (as type string), and some may not. A typical grid could look like this: {[9, 5]=>"Alaina", [10, 3]=>"Courtney", [11, 1]=>"Gladys", [8, 7]=>"Alford", [14, 11]=>"Lesley", [17, 2]=>"Lawson", [0, 5]=>"Katrine", [2, 1]=>"Tyra", [3, 3]=>"Fredy", [1, 7]=>"Magnus", [6, 9]=>"Nels", [7, 11]=>"Kylie", [11, 0]=>"Kellen", [10, 2]=>"Johan", [14, 10]=>"Justice", [0, 4]=>"Barton", [2, 0]=>"Charley", [3, 2]=>"Magnolia", [1, 6]=>"Maximo", [7, 10]=>"Olga", [19, 5]=>"Isadore", [16, 3]=>"Delfina", [17, 1]=>"Noe", [20, 11]=>"Francis", [10, 5]=>"Creola", [9, 3]=>"Bulah", [8, 1]=>"Lempi", [11, 7]=>"Raquel", [13, 11]=>"Jace", [1, 5]=>"Garth", [3, 1]=>"Ernest", [2, 3]=>"Malcolm", [0, 7]=>"Alejandrin", [7, 9]=>"Marina", [6, 11]=>"Otilia", [16, 2]=>"Hailey", [20, 10]=>"Brandt", [8, 0]=>"Madeline", [9, 2]=>"Leanne", [13, 10]=>"Jenifer", [1, 4]=>"Humberto", [3, 0]=>"Nicholaus", [2, 2]=>"Nadia", [0, 6]=>"Abigail", [6, 10]=>"Zola", [20, 5]=>"Clementina", [23, 3]=>"Alvah", [19, 11]=>"Wallace", [11, 5]=>"Tracey", [8, 3]=>"Hulda", [9, 1]=>"Jedidiah", [10, 7]=>"Annetta", [12, 11]=>"Nicole", [2, 5]=>"Alison", [0, 1]=>"Wilma", [1, 3]=>"Shana", [3, 7]=>"Judd", [4, 9]=>"Lucio", [5, 11]=>"Hardy", [19, 10]=>"Immanuel", [9, 0]=>"Uriel", [8, 2]=>"Milton", [12, 10]=>"Elody", [5, 10]=>"Alexanne", [1, 2]=>"Lauretta", [0, 0]=>"Louvenia", [2, 4]=>"Adelia", [21, 5]=>"Erling", [18, 11]=>"Corene", [22, 3]=>"Haskell", [11, 11]=>"Leta", [10, 9]=>"Terrence", [14, 1]=>"Giuseppe", [15, 3]=>"Silas", [12, 5]=>"Johnnie", [4, 11]=>"Aurelie", [5, 9]=>"Meggie", [2, 7]=>"Phoebe", [0, 3]=>"Sister", [1, 1]=>"Violet", [3, 5]=>"Lilian", [18, 10]=>"Eusebio", [11, 10]=>"Emma", [15, 2]=>"Theodore", [14, 0]=>"Cassidy", [4, 10]=>"Edmund", [2, 6]=>"Claire", [0, 2]=>"Madisen", [1, 0]=>"Kasey", [3, 4]=>"Elijah", [17, 11]=>"Susana", [20, 1]=>"Nicklaus", [21, 3]=>"Kelsie", [10, 11]=>"Garnett", [11, 9]=>"Emanuel", [15, 1]=>"Louvenia", [14, 3]=>"Otho", [13, 5]=>"Vincenza", [3, 11]=>"Tate", [2, 9]=>"Beau", [5, 7]=>"Jason", [6, 1]=>"Jayde", [7, 3]=>"Lamont", [4, 5]=>"Curt", [17, 10]=>"Mack", [21, 2]=>"Lilyan", [10, 10]=>"Ruthe", [14, 2]=>"Georgianna", [4, 4]=>"Nyasia", [6, 0]=>"Sadie", [16, 11]=>"Emil", [21, 1]=>"Melba", [20, 3]=>"Delia", [3, 10]=>"Rosalee", [2, 8]=>"Myrtle", [7, 2]=>"Rigoberto", [14, 5]=>"Jedidiah", [13, 3]=>"Flavie", [12, 1]=>"Evie", [8, 9]=>"Olaf", [9, 11]=>"Stan", [20, 2]=>"Judge", [5, 5]=>"Cassie", [7, 1]=>"Gracie", [6, 3]=>"Armando", [4, 7]=>"Delia", [3, 9]=>"Marley", [16, 10]=>"Robyn", [2, 11]=>"Richie", [12, 0]=>"Gilberto", [13, 2]=>"Dedrick", [9, 10]=>"Liam", [5, 4]=>"Jabari", [7, 0]=>"Enola", [6, 2]=>"Lela", [3, 8]=>"Jade", [2, 10]=>"Johnson", [15, 5]=>"Willow", [12, 3]=>"Fredrick", [13, 1]=>"Beau", [9, 9]=>"Carlie", [8, 11]=>"Daisha", [6, 5]=>"Declan", [4, 1]=>"Carolina", [5, 3]=>"Cruz", [7, 7]=>"Jaime", [0, 9]=>"Anthony", [1, 11]=>"Esta", [13, 0]=>"Shaina", [12, 2]=>"Alec", [8, 10]=>"Lora", [6, 4]=>"Emely", [4, 0]=>"Rodger", [5, 2]=>"Cedrick", [0, 8]=>"Collin", [1, 10]=>"Armani", [16, 5]=>"Brooks", [19, 3]=>"Eleanora", [18, 1]=>"Alva", [7, 5]=>"Melissa", [5, 1]=>"Tabitha", [4, 3]=>"Aniya", [6, 7]=>"Marc", [1, 9]=>"Marjorie", [0, 11]=>"Arvilla", [19, 2]=>"Adela", [7, 4]=>"Zakary", [5, 0]=>"Emely", [4, 2]=>"Alison", [1, 8]=>"Lorenz", [0, 10]=>"Lisandro", [17, 5]=>"Aylin", [18, 3]=>"Giles", [19, 1]=>"Kyleigh", [8, 5]=>"Mary", [11, 3]=>"Claire", [10, 1]=>"Avis", [9, 7]=>"Manuela", [15, 11]=>"Chesley", [18, 2]=>"Kristopher", 
[24, 3]=>"Zola", [8, 4]=>"Pietro", [10, 0]=>"Delores", [11, 2]=>"Timmy", [15, 10]=>"Khalil", [18, 5]=>"Trudie", [17, 3]=>"Rafael", [16, 1]=>"Anthony"} What I need to do, though, is basically remove all the empty entries. Let's say [17, 3]=>"Rafael" does not have an element in front of it (say no [16, 3] exists); then [17, 3] should become [16, 3], etc. So basically all empty items will be popped off the vertical (row) structure of the hash. Are there functions I should have a look at, or is there an easy squash-like method that would just remove the blanks and shift the other items along? Thanks in advance for your help.
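
    One way to read this: for each row (the second coordinate), keep the stored values in increasing order of the first coordinate and re-number that coordinate consecutively, so the gaps close up. A sketch of that compaction using Python dictionaries rather than the asker's Ruby hash (the function name and tiny example grid are illustrative):

        from collections import defaultdict

        def squash_columns(grid):
            """Compact a sparse {(x, y): value} grid along x: for each y, keep the
            values in increasing-x order and re-number their x coordinates
            0, 1, 2, ... so any gaps in that row disappear."""
            rows = defaultdict(list)
            for (x, y), value in grid.items():
                rows[y].append((x, value))
            squashed = {}
            for y, cells in rows.items():
                for new_x, (_, value) in enumerate(sorted(cells)):
                    squashed[(new_x, y)] = value
            return squashed

        # The gap at (1, 0) closes, so "Gladys" moves from x=2 to x=1:
        print(squash_columns({(0, 0): "Alaina", (2, 0): "Gladys", (0, 1): "Courtney"}))
        # {(0, 0): 'Alaina', (1, 0): 'Gladys', (0, 1): 'Courtney'}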

    Read the article
