Search Results

Search found 69357 results on 2775 pages for 'data oriented design'.

Page 181/2775 | < Previous Page | 177 178 179 180 181 182 183 184 185 186 187 188  | Next Page >

  • RPi and Java Embedded GPIO: Big Data and Java Technology

    - by hinkmond
    Java Embedded and Big Data go hand in hand, especially as demonstrated by prototyping on a Raspberry Pi to show how well the Java Embedded platform can perform on a small embedded device, which then becomes the proof of concept for industrial controllers, medical equipment, networking gear, or any type of sensor-connected device generating large amounts of data. The key is a fast and reliable way to access that data using Java technology. In the previous blog posts you've seen the integration of a static electricity sensor and the Raspberry Pi through the GPIO port, then accessing that data through Java Embedded code. It's important to point out how this works and why it works well with Java code.

    First, the version of Linux (Debian Wheezy/Raspbian) found on the RPi exposes the GPIO ports through Linux OS managed file handles. This is key in avoiding terrible and complex coding using register manipulation in C code, or having to program in a less elegant and clumsy procedural scripting language such as Python. Java Embedded instead gives you fast access to those GPIO ports through those same Linux file handles. Java already has an easy, high-performance way to work with file handles that matches direct access of those file handles from the Linux OS. The java.io.FileWriter API lets us open the same file handles that the Linux OS uses for the GPIO ports. Then, by first resetting the ports using the unexport and export file handles, we can initialize them for use in a Java app.

        // Open file handles to GPIO port unexport and export controls
        FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
        FileWriter exportFile = new FileWriter("/sys/class/gpio/export");
        ...
        // Reset the port
        unexportFile.write(gpioChannel);
        unexportFile.flush();
        // Set the port for use
        exportFile.write(gpioChannel);
        exportFile.flush();

    Then, another set of file handles can be used by the Java app to control the direction of the GPIO port by writing either "in" or "out" to the direction file handle.

        // Open file handle to input/output direction control of port
        FileWriter directionFile =
            new FileWriter("/sys/class/gpio/gpio" + gpioChannel + "/direction");
        // Set port for input
        directionFile.write("in"); // Or, use "out" for output
        directionFile.flush();

    And, finally, a RandomAccessFile handle can be used, with performance on par with native C code (only milliseconds to read and write data) and low overhead (unlike Python), to move data in and out on the GPIO port, while the object-oriented nature of Java programming allows for an easy way to construct complex analytic software around that data access to the external world.

        RandomAccessFile[] raf = new RandomAccessFile[GpioChannels.length];
        ...
        // Reset file seek pointer to read latest value of GPIO port
        raf[channum].seek(0);
        raf[channum].read(inBytes);
        inLine = new String(inBytes);

    It's Big Data from sensors and industrial/medical/networking equipment meeting complex analytical software on a small constrained device (like a Linux/ARM RPi), where Java Embedded allows you to shine as an Embedded Device Software Designer.

    Hinkmond
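    For reference, here is a minimal, self-contained sketch that strings the fragments above together into one runnable class. It is an illustration based on this post, not the exact sensor code: the pin number (7) and the simple polling loop are assumptions you would adapt to your own wiring.

        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class GpioReadSketch {
            public static void main(String[] args) throws IOException, InterruptedException {
                String gpioChannel = "7"; // hypothetical pin number; adjust for your wiring

                // Reset the port (ignore the error if it was never exported), then export it
                try (FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport")) {
                    unexportFile.write(gpioChannel);
                    unexportFile.flush();
                } catch (IOException ignored) {
                    // unexport fails when the pin is not currently exported; safe to ignore
                }
                try (FileWriter exportFile = new FileWriter("/sys/class/gpio/export")) {
                    exportFile.write(gpioChannel);
                    exportFile.flush();
                }

                // Set the port direction to input
                try (FileWriter directionFile =
                         new FileWriter("/sys/class/gpio/gpio" + gpioChannel + "/direction")) {
                    directionFile.write("in");
                    directionFile.flush();
                }

                // Poll the value file; seek(0) rereads the latest state on each pass
                byte[] inBytes = new byte[1];
                try (RandomAccessFile raf =
                         new RandomAccessFile("/sys/class/gpio/gpio" + gpioChannel + "/value", "r")) {
                    for (int i = 0; i < 10; i++) {
                        raf.seek(0);
                        raf.read(inBytes);
                        System.out.println("GPIO " + gpioChannel + " = " + new String(inBytes));
                        Thread.sleep(500);
                    }
                }
            }
        }

    Run it as root (or with udev rules granting access to /sys/class/gpio) on a Raspberry Pi with a sensor wired to the chosen pin.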

    Read the article

  • SSIS Debugging Tip: Using Data Viewers

    - by Jim Giercyk
    When you have an SSIS package error, it is often very helpful to see the data records that are causing the problem. After all, if your input has 50,000 records and 1 of them has corrupt data, it can be a chore. Your execution results will tell you which column contains the bad data, but not which record... enter the Data Viewer.

    In this scenario I have created a truncation error. The input length of [lastname] is 50, but the output table has a length of 15. When it runs, at least one of the records causes the package to fail. Now what? We can tell from our execution results that there is a problem with [lastname], but we have no idea which record.

    Let's identify the row that is actually causing the problem. First, we grab the oft-forgotten Row Count shape from our toolbar and connect it to the error output from our input query. Remember that in order to intercept errors with the error output, you must redirect them.

    The Row Count shape requires 1 integer variable. For our purposes, we will not reference the variable, but it is still required in order for the package to run. Typically we would use the variable to hold the number of rows in the table and refer back to it later in our process; here we are simply using the Row Count as a "dead end" for errors. I called my variable RowCounter. To create a variable, with no shapes selected, right-click on the background and choose Variable.

    Once we have set up the Row Count shape, we can right-click on the red line (error output) from the query and select Data Viewers. In the popup, we click the Add button. There are other fancier options we can play with, but for now we just want to view the output in a grid. We select Grid, then click OK on all of the popup windows to close them. We should now see a grid icon (a pair of glasses) on the error output line.

    So, we are ready to catch the error output in a grid and see what is causing the problem! This time when we run the package, it does not fail, because we directed the error to the Row Count. We also get a popup window showing the error record in a grid; if there were multiple errors we would see them all. Indeed, the [lastname] column is longer than 15 characters. Notice the last column in the grid, [Error Code - Description]. We knew this was a truncation error before we added the grid, but if you have worked with SSIS for any length of time, you know that some errors are much more obscure. The description column can be very useful under those circumstances!

    Data viewers can be used any time we want to see the data that is actually in the pipeline; they stop the package temporarily until we close them. Also remember that the Row Count shape can be used as a "dead end": it is useful during development when we want to see the output from a dataflow but don't want to update a table or file with the data. Data viewers are an invaluable tool for both development and debugging. Just remember to REMOVE THEM before putting your package into production.

    Read the article

  • Error in data view when connecting to an Oracle DB

    - by Mike Polen
    When using SharePoint Designer I found this link that stepped me through how to get it working: http://spsolution.blogspot.com/2008/12/how-to-insert-data-source-in-sharepoint.html That allowed SharePoint Designer to talk to Oracle, but when I placed a data view on a page it gave me the following error:

        Error while executing web part: System.Data.OracleClient.OracleException: ORA-00923: FROM keyword not found where expected
           at System.Data.OracleClient.OracleConnection.CheckError(OciErrorHandle errorHandle, Int32 rc)
           at System.Data.OracleClient.OracleCommand.Execute(OciStatementHandle statementHandle, CommandBehavior behavior, Boolean needRowid, OciRowidDescriptor& rowidDescriptor, ArrayList& resultParameterOrdinals)
           at System.Data.OracleClient.OracleCommand.Execute(OciStatementHandle statementHandle, CommandBehavior behavior, ArrayList& resultParameterOrdinals)
           at System.Data.OracleClient.OracleCommand.ExecuteReader(CommandBehavior behavior)
           at System.Data.OracleClient.OracleCommand.ExecuteDbDataReader(CommandBehavior behavior)
           at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable)
           at System.Web.UI.WebControls.SqlDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments)
           at System.Web.UI.DataSourceView.Select(DataSourceSelectArguments arguments, DataSourceViewSelectCallback callback)
           at Microsoft.SharePoint.WebControls.SingleDataSource.GetXPathNavigatorInternal()
           at Microsoft.SharePoint.WebControls.SingleDataSource.GetXPathNavigator()
           at Microsoft.SharePoint.WebControls.SingleDataSource.GetXPathNavigator(IDataSource datasource, Boolean originalData)
           at Microsoft.SharePoint.WebPartPages.DataFormWebPart.GetXPathNavigator(String viewPath)
           at Microsoft.SharePoint.WebPartPages.DataFormWebPart.PrepareAndPerformTransform()

    I am mystified.

    Read the article

  • Is it possible to have a wireless in-house NAS with wireless data transfer rates equivalent to SATA speeds?

    - by techaddict
    Basically I would like to know if it is possible to set up a NAS in my house, accessed wirelessly, that can reach real-life data transfer speeds equivalent to USB 3.0 or an internal SATA hard drive. I have been wanting to do this for some time (a couple of years now). Basically, this is what I want to do: plug in a number of hard drives in an array, somewhere in my house, to be left plugged in and never have to be monitored - ideally several terabytes. Whenever I am home, my computer and laptop should automatically find the NAS, making it as easy as plugging in an external hard drive - except completely wirelessly. Data transfer needs to be as seamless and quick as having added another internal hard drive in my laptop. Moreover, data should be accessible without having to copy it over - I should be able to wirelessly access the NAS, browse files, and open files directly from the NAS. For example, say I wanted to open a video - I should be able to play the video that is located on the NAS, directly from the NAS, completely wirelessly. If I wanted to open a .pdf file, I should be able to open it and read it directly from the NAS, as if it were located on my physical internal hard drive. Cost is important as well. Please tell me what equipment I need for this to be possible. I know there are geniuses out there who can tell me if this is possible.

    Read the article

  • zend gdata google base getting link url and parsing xml with object oriented php

    - by thrice801
    Hi, so I am using Zend Framework's Gdata extension to parse XML data from Google Base: http://framework.zend.com/manual/en/zend.gdata.gbase.html It is working fine except for one damn important thing I can't seem to parse: the link to the actual page, because there are two link elements and the result is returned as an array. I've tried var_dump-ing the array, but what came out didn't help me figure out what was going on any better. So here are the specifics of the issue.

        echo $entry->title->text."<br />".$entry->id->text."<br />".$entry->content->text."<br />".$entry->author[0]->name."<br />";

    (The above code works fine.) Now, according to the documentation, I thought I could get the link href by going $entry->link->text or $entry->link->url (tried both, neither worked). Anyway, I'm not sure whether this is Zend specific or just a gap in my OOP understanding, but any help would be much appreciated!

    Read the article

  • SQLAuthority News – List of Master Data Services White Paper

    - by pinaldave
    Since my TechEd India 2010 presentation I have been very excited about SQL Server 2010 MDS. I just came across some very interesting white papers on the Microsoft site related to this subject. Here is the list, along with where you can download them. They are all written by top experts at Microsoft.

    Master Data Management from a Business Perspective - Download a PDF version or an XPS version
    Master Data Management from a Technical Perspective - Download a PDF version or an XPS version
    Bringing Master Data Management to the Stakeholders - Download a PDF version or an XPS version
    Implementing a Phased Approach to Master Data Management - Download a PDF version or an XPS version
    SharePoint Workflow Integration with Master Data Services - Read it here.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL

    Read the article

  • Offloading (Some) EBS 12 Reporting to Active Data Guard Instances

    - by Steven Chan
    For most Oracle Database users, Oracle Active Data Guard allows users to:

    - Create a physical standby database for business continuity and disaster recovery
    - Offload reporting from the production database to the read-only physical standby database

    E-Business Suite customers have been able to use Active Data Guard to create physical standby databases for their EBS environments since the feature was introduced with the 11g Database. EBS sysadmins can use the generic Active Data Guard documentation to take advantage of the Active Data Guard standby database capabilities. I am pleased to announce that it is now possible to offload a subset of some ReportWriter-based reports -- but not all -- from a production EBS environment to an Active Data Guard physical standby database. But before I go into the details of this newly-certified configuration, it's necessary to understand some details about what happens whenever someone attempts to access the E-Business Suite.

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

    - by rajbk
    In the previous post, you saw how to create an OData feed and pre-filter the data. In this post, we will see how to shape the data. A sample project is attached at the bottom of this post. (Part 1: Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1.)

    Shaping the feed. The Product feed we created earlier returns too much information about our products. Let's change this so that only the following properties are returned: ProductID, ProductName, QuantityPerUnit, UnitPrice, UnitsInStock. We also want to return only products that are not discontinued.

    Splitting the entity. To shape our data according to the requirements above, we are going to split our Product entity into two and expose one through the feed. The exposed entity will contain only the properties listed above. We will use the other entity in our query interceptor to pre-filter the data so that discontinued products are not returned. Go to the design surface for the entity model and make a copy of the Product entity; a "Product1" entity gets created. Rename Product1 to ProductDetail. Right-click on the Product entity and select "Add Association", then make a one-to-one association between Product and ProductDetail. Keep only the properties we wish to expose on the Product entity and delete all other properties on it (you delete a property on an entity by right-clicking on the property and selecting "Delete"). Keep the ProductID on ProductDetail and delete any other property on the ProductDetail entity that is already present in the Product entity.

    Mapping the entity to database tables. Right-click on "ProductDetail", go to "Table Mapping", and add a mapping to the "Products" table in the Mapping Details.

    Add a referential constraint. Let's add a referential constraint, which is similar to a referential integrity constraint in SQL. Double-click on the association between the entities and add the constraint with "Principal" set to "Product".

    Let us review what we did so far:

    - We made a copy of the Product entity and called it ProductDetail.
    - We created a one-to-one association between these entities.
    - Excluding the ProductID, we made sure properties were not duplicated between these entities.
    - We added a ProductDetail entity to Products table mapping (entity to database).
    - We added a referential constraint between the entities.

    Let's build our project. We get the following error: "'NortwindODataFeed.Product' does not contain a definition for 'Discontinued' and no extension method 'Discontinued' accepting a first argument of type 'NortwindODataFeed.Product' could be found ..." The reason for this error is that our Product entity no longer has a "Discontinued" property; we "moved" it to the ProductDetail entity, since we want our Product entity to contain only properties that will be exposed by our feed. Since we have a one-to-one association between the entities, we can easily rewrite our query interceptor like so:

        [QueryInterceptor("Products")]
        public Expression<Func<Product, bool>> OnReadProducts()
        {
            return o => o.ProductDetail.Discontinued == false;
        }

    Similarly, all "hidden" properties of the Product table are available to us internally (through the ProductDetail entity) for any additional logic we wish to implement. Compile the project and view the feed. We see that the feed returns only the properties that were part of the requirement.
    To see the data in JSON format, you have to create a request with the following request header (easy to do in jQuery):

        Accept: application/json, text/javascript, */*

    The result should look like this:

        { "d" : { "results": [
            { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(1)", "type": "NorthwindModel.Product" },
              "ProductID": 1, "ProductName": "Chai", "QuantityPerUnit": "10 boxes x 20 bags",
              "UnitPrice": "18.0000", "UnitsInStock": 39 },
            { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(2)", "type": "NorthwindModel.Product" },
              "ProductID": 2, "ProductName": "Chang", "QuantityPerUnit": "24 - 12 oz bottles",
              "UnitPrice": "19.0000", "UnitsInStock": 17 },
            { ...

    If anyone has the $format option working, please post a comment; it was not working for me at the time of writing this. We have successfully pre-filtered our data to expose only products that have not been discontinued and shaped our data so that only certain properties of the entity are exposed. Note that there are several other ways you could implement this, such as creating a QueryView, a stored procedure, or a DefiningQuery. You have seen how easy it is to create an OData feed, shape the data, and pre-filter it while hardly writing any code of your own. For more details on OData, Google it with your favorite search engine :-) Also check out one of the most passionate persons I have ever met, Pablo Castro - the architect of WCF Data Services (codename Astoria). Watch his MIX 2010 presentation titled "OData: There's a Feed for That" here.

    Download Sample Project for VS 2010 RTM: NortwindODataFeed.zip
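    The post suggests jQuery for setting that header. As a rough illustration that is not part of the original sample, here is a minimal Java sketch that requests the same feed as JSON; the URL and port mirror the localhost example above and are assumptions about your local setup.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        public class ODataJsonClient {
            public static void main(String[] args) throws IOException {
                // Assumed local feed URL; adjust the port to your own service
                URL url = new URL("http://localhost:2576/DataService.svc/Products");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                // Ask the service for JSON instead of the default Atom/XML
                conn.setRequestProperty("Accept", "application/json, text/javascript, */*");

                try (BufferedReader reader = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println(line); // raw JSON payload
                    }
                } finally {
                    conn.disconnect();
                }
            }
        }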

    Read the article

  • SQL Server DATA Tools CTP4 Released!

    - by hassanfadili
    SQL Server team has released the new SQL Server Data Tools CTP4. Congratulations and thanks to Gert Drapers and his team on this great milestone. To learn more about this SSDT CTP4 release, check:

    What's new in SQL Server Data Tools CTP4?
    http://blogs.msdn.com/b/ssdt/archive/2011/11/21/what-s-new-in-sql-server-data-tools-ctp4.aspx

    SQL Server Data Tools CTP4 vs. VS2010 Database Projects
    http://blogs.msdn.com/b/ssdt/archive/2011/11/21/sql-server-data-tools-ctp4-vs-vs2010-database-projects.aspx

    Top VSDB->SSDT Project Conversion Issues
    http://blogs.msdn.com/b/ssdt/archive/2011/11/21/top-vsdb-gt-ssdt-project-conversion-issues.aspx

    Uninstalling SQL Server Developer Tools CTP3 (Code-named "Juneau")
    http://blogs.msdn.com/b/ssdt/archive/2011/11/21/uninstalling-ssdt-ctp3-code-named-juneau.aspx
    This last one actually points to a nifty PowerShell script to help you uninstall.

    Have fun.

    Read the article

  • ODI 11g - Oracle Data Integrator 11g – A Hands-On Tutorial

    - by David Allan
    I have been asked by Packt Publishing to review a brand new book on Oracle Data Integrator: Getting Started with Oracle Data Integrator 11g - A Hands-On Tutorial. I'm waiting on this book to arrive to see what goodies are inside; I'll blog a review later. The book can be found at Oracle Data Integrator 11g - A Hands-On Tutorial. Looking at the table of contents, it looks like it gives a good broad introduction (including various data formats) to the product:

    Chapter 1: Product Overview
    Chapter 2: Product Installation
    Chapter 3: Using Variables
    Chapter 4: ODI Sources, Targets, and Knowledge Modules
    Chapter 5: Working with Databases
    Chapter 6: Working with MySQL
    Chapter 7: Working with Microsoft SQL Server
    Chapter 8: Integrating File Data
    Chapter 9: Working with XML Files
    Chapter 10: Creating Workflows - Packages and Load Plans
    Chapter 11: Error Management
    Chapter 12: Managing and Monitoring ODI Components
    Chapter 13: Concluding Remarks

    Looking forward to it.

    Read the article

  • SSIS Snack: Data Flow Source Adapters

    - by andyleonard
    Introduction: Configuring a Source Adapter in a Data Flow Task couples (binds) the Data Flow to an external schema. This has implications for dynamic data loads.

    "Why Can't I...?" I'm often asked a question similar to the following: "I have 17 flat files with different schemas that I want to load to the same destination database - how many Data Flow Tasks do I need?" I reply, "17 different schemas? That's easy, you need 17 Data Flow Tasks." In his book Microsoft SQL Server 2005 Integration Services...(read more)

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering a 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand if this new processor stands up to its promises. Several tests were performed, mainly focused on:

    - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
    - Overall throughput of the SPARC T4-1 server using multiple threads

    The tests consisted in reading large amounts of data (tens of gigabytes), processing them, and writing them back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project (CPU performance), bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided in two ZFS pools for read and write operations.

    Multi-thread: testing throughput on the Oracle T4-1. The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS as filesystem and volume management. Each thread read a dedicated 1GB-large file containing 12.5M lines with the following structure:

        customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
        1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
        2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
        3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
        4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
        [...]

    Unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. L2ARC): once the cache is hot, there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck, so having a good IO subsystem is key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline. For the database load tests, only the best IO configuration (using external storage devices) was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

    Single-thread: testing single-thread performance. Seven different tests were performed on both servers. Given that only one thread, and thus one file, was read, no IO bottleneck was involved, all data being served from the ZFS cache.

    - Read File → Filter → Write File: read a file, filter the data, write the filtered data to a new file. The filter is set on the "Status" column: only lines with status set to "A" are selected. This limits each output file to about 500 MB.
    - Read File → Load Database Table: read a file, insert into a single Oracle table.
    - Average: read a file, compute the average of a numeric column, write the result to a new file.
    - Division & Square Root: read a file, perform a division and square root on a numeric column, write the result data to a new file.
    - Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
    - Transform: read a file, transform it, write the result data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    - Sort: read a file, sort a numeric and an alphanumeric column, write the result data to a new file.

    The following table presents the final results of the tests. The throughput unit is thousands of lines processed per second (K lines/second). Improvement is the percentage of improvement between the T5140 (SPARC T2+) and the T4-1.

        Test                     T4-1 (Time s)  T5140 (Time s)  Improvement  T4-1 (Throughput)  T5140 (Throughput)
        Read/Filter/Write        125            806             645%         100                16
        Read/Load Database       195            1111            570%         64                 11
        Average                  96             557             580%         130                22
        Division & Square Root   161            1054            655%         78                 12
        Oracle DB Dump           164            945             576%         76                 13
        Transform                159            1124            707%         79                 11
        Sort                     251            1336            532%         50                 9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way toward filling the gap in single-thread performance, without sacrificing the multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!

    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.
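    As a rough illustration of what the "Read File → Filter → Write File" workload does (this is not Talend's generated code, and the file names are made up), a minimal Java sketch might look like this:

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class ReadFilterWrite {
            public static void main(String[] args) throws IOException {
                // Hypothetical file names; each input file holds 12.5M ';'-separated lines
                String inPath = args.length > 0 ? args[0] : "customers.csv";
                String outPath = args.length > 1 ? args[1] : "customers_active.csv";

                try (BufferedReader reader = Files.newBufferedReader(Paths.get(inPath), StandardCharsets.UTF_8);
                     BufferedWriter writer = Files.newBufferedWriter(Paths.get(outPath), StandardCharsets.UTF_8)) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        String[] fields = line.split(";");
                        // Cust_Status is the 8th column (index 7) in the layout shown above
                        if (fields.length > 7 && "A".equals(fields[7])) {
                            writer.write(line);
                            writer.newLine();
                        }
                    }
                }
            }
        }

    Each input line is kept only when its Cust_Status field is "A", which is what limits the output to roughly 500 MB per file in the benchmark description.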

    Read the article

  • What are the best resources for learning about concurrency and multi-threaded applications?

    - by Zepee
    I realised I have a massive knowledge gap when it comes to multi-threaded applications and concurrent programming. I've covered some basics in the past, but most of it seems to be gone from my mind, and it is definitely a field that I want, and need, to be more knowledgeable about. What are the best resources for learning about building concurrent applications? I'm a very practically oriented person, so if said book contains concrete examples, all the better, but I'm open to suggestions. I personally prefer to work in pseudocode or C++, and a slant toward game development would be best, but is not required.

    Read the article

  • Oracle Database 11g Helps Control Exponential Data Growth

    - by [email protected]
    The 2010 ESG annual customer survey is now available. As part of it, ESG interviewed 300 customers about their IT priorities and, unsurprisingly, "Manage Data Growth" is top of the list. Perhaps less self-evident is the proposed solution to target this prime concern: "Often overlooked because it is a database platform, Oracle Database 11g offers additional capabilities such as automatic storage management (ASM), advanced data compression, and data protection that make managing data growth much easier for organizations of any size." The paper goes on to discuss these capabilities and highlights their potential benefits. Oracle Database 11g Helps Control Exponential Database Growth - a worthwhile read for anyone having to deal with rapidly increasing amounts of data. Download your free copy here.

    Read the article

  • Does software testing methodology rely on flawed data?

    - by Konrad Rudolph
    It’s a well-known fact in software engineering that the cost of fixing a bug increases exponentially the later in development that bug is discovered. This is supported by data published in Code Complete and adapted in numerous other publications. However, it turns out that this data never existed. The data cited by Code Complete apparently does not show such a cost / development time correlation, and similar published tables only showed the correlation in some special cases and a flat curve in others (i.e. no increase in cost). Is there any independent data to corroborate or refute this? And if true (i.e. if there simply is no data to support this exponentially higher cost for late discovered bugs), how does this impact software development methodology?

    Read the article

  • How to manage primary key while updating [migrated]

    - by Subin Jacob
    In the following table, primaryKeyColumn is the primary key. To maintain the data history, I always query the values with a WHERE condition (WHERE StatusColumn = 1), and I set StatusColumn to 0 if the data is edited (so that I can keep the previous data). But the problem is, once I update it to 0, I can't insert the same key into primaryKeyColumn again, since the column is validated as a primary key. How can I manage this kind of validation? What mistake did I make in this design?

        primaryKeyColumn  ValueColumn  StatusColumn
        ----------------  -----------  ------------
        2                 Name1        1
        3                 Name2        1
        4                 Name3        0

    Read the article

  • Writing use cases for XML mapping scenarios between two different systems

    - by deepak_prn
    I am having some trouble writing use cases for XML mapping performed after a certain trigger is invoked by the system. For example, one of the scenarios goes: the store cashier sells an item, and the transaction data is sent to the data management system. Now, I am writing a functional design for the scenario which deals with mapping XML fields between our system and the data management system. Question: I was wondering if anyone has had to deal with writing use cases, or extension use cases, for mapping XML fields between two systems (there is no XSLT involved)? If so, did you use a table to represent the field mappings (example is below), or any other visualization tool which does not break the bank? I searched many questions on SO and here, but nothing came close to my requirement.

    Read the article

  • How should I make searching a relational database more efficient?

    - by Travis J
    This is in the scope of a web application. I have a database which has a few nested relations. There is a feature which depicts the history of a large chain of relations. It is essentially a data analysis feature. The issue is that in order to search, a large object graph must be loaded - the loading time for this object graph is not quick enough to be viable. The problem is that without loading the whole graph it makes searching from a single string nearly impossible. In order to search, explicit fields must be specified and the search data supplied. Is there a design pattern for exposing the data in a way which facilitates a single string search instead of having to explicitly define parameters?
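    One common approach (my suggestion, not something stated in the question) is to maintain a denormalized "search document" per entity, built when the entity is written, so a single-string search never has to load the full object graph at query time. A minimal Java sketch of the idea, with all names hypothetical:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Locale;

        // Hypothetical flattened "search document": one denormalized row per entity.
        record SearchDocument(long entityId, String searchText) {}

        class SearchIndex {
            private final List<SearchDocument> documents = new ArrayList<>();

            // Called on write: concatenate the searchable fields from the graph once.
            void index(long entityId, String... fields) {
                documents.add(new SearchDocument(entityId,
                        String.join(" ", fields).toLowerCase(Locale.ROOT)));
            }

            // Single-string search: return the ids of documents whose flattened text matches.
            List<Long> search(String query) {
                String needle = query.toLowerCase(Locale.ROOT);
                List<Long> hits = new ArrayList<>();
                for (SearchDocument d : documents) {
                    if (d.searchText().contains(needle)) {
                        hits.add(d.entityId());
                    }
                }
                return hits;
            }
        }

    In a real application the flattened text would live in an indexed database column or a full-text index rather than in application memory, but the shape of the idea is the same: pay the flattening cost on write so that search does not need the object graph.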

    Read the article

  • ODI 12c - Aggregating Data

    - by David Allan
    This posting will look at the aggregation component that was introduced in ODI 12c. For many ETL tool users this shouldn't be a big surprise; it's a little different from ODI 11g, but for good reason. You can use this component for composing data with relational-like operations such as sum, average and so forth. Also, Oracle SQL supports special functions called analytic SQL functions; in ODI 12c you can now use a specially configured aggregation component or the expression component for these. In database systems an aggregate transformation is a transformation where the values of multiple rows are grouped together as input on certain criteria to form a single value of more significant meaning - that's exactly the purpose of the aggregate component.

    In the image below you can see the aggregate component in action within a mapping; for how this and a few other examples are built, look at the ODI 12c Aggregation Viewlet here - the viewlet illustrates a simple aggregation being built, and then some Oracle analytic SQL such as AVG(EMP.SAL) OVER (PARTITION BY EMP.DEPTNO) built using both the aggregate component and the expression component. In 11g you used to just write the aggregate expression directly on the target. This made life easy in some cases, but it wasn't a very obvious gesture, and it had other drawbacks with ordering of transformations (aggregate before join/lookup, after set, and so forth) and with supporting analytic SQL, for example - there are a lot of postings from creative folks working around this in 11g, anything from customizing KMs to bypassing aggregation analysis in the ODI code generator.

    The aggregate component has a few interesting aspects:

    1. First and foremost it defines the attributes projected from it - ODI automatically performs the grouping; all you do is define the aggregation expressions for the aggregated columns. In 12c you can control this automatic grouping behavior so that you get the code you desire; you can indicate that an attribute should not be included in the group by, which is what I did in the analytic SQL example using the aggregate component.

    2. The component has a few other properties of interest: it has a HAVING clause and a manual group by clause. The HAVING clause includes a predicate used to filter rows resulting from the GROUP BY clause. Because it acts on the results of the GROUP BY clause, aggregation functions can be used in the HAVING clause predicate. In 11g the filter was overloaded and used for both the having clause and the filter clause; this is no longer the case. If a filter is after an aggregate, it is after the aggregate (not sometimes after, sometimes having).

    3. The manual group by clause lets you use special database grouping grammar if you need to. For example, Oracle has a wealth of highly specialized grouping capabilities for data warehousing, such as the CUBE function. If you want to use specialized functions like that, you can manually define the code here. The example below shows the use of a manual group by from an example in the Oracle Database data warehousing guide, where the SUM aggregate function is used along with the CUBE function in the group by clause. The SQL I am trying to generate looks like the following (from the data warehousing guide):

        SELECT channel_desc, calendar_month_desc, countries.country_iso_code,
               TO_CHAR(SUM(amount_sold), '9,999,999,999') SALES$
        FROM sales, customers, times, channels, countries
        WHERE sales.time_id = times.time_id
          AND sales.cust_id = customers.cust_id
          AND sales.channel_id = channels.channel_id
          AND customers.country_id = countries.country_id
          AND channels.channel_desc IN ('Direct Sales', 'Internet')
          AND times.calendar_month_desc IN ('2000-09', '2000-10')
          AND countries.country_iso_code IN ('GB', 'US')
        GROUP BY CUBE(channel_desc, calendar_month_desc, countries.country_iso_code);

    I can capture the source datastores, the filters and joins using ODI's dataset (or as a traditional flow), which enables us to incrementally design the mapping and the aggregate component for the sum and group by. In the mapping you can see the joins and filters declared in ODI's dataset, allowing you to capture the relationships of the datastores required in an entity-relationship style, just like ODI 11g. The mix of ODI's declarative design and the common flow design provides a familiar design experience. The example below illustrates flow design (basic arbitrary ordering) - a table load where only the employees who have the maximum commission are loaded into a target. The maximum commission is retrieved from the bonus datastore, and there is a lookup using employees as the driving table, with only those having maximum commission projected.

    Hopefully this has given you a taster for some of the new capabilities provided by the aggregate component in ODI 12c. In summary, the actions should be much more consistent in behavior and more easily discoverable for users, and the use of the components in a flow graph also supports arbitrary designs while the tool (rather than the interface designer) takes care of the realization using ODI's knowledge modules. Interested to know if a deep dive into each component is interesting for folks. Any thoughts?

    Read the article

  • Which are the cons of using only non-member functions and POD?

    - by Miro
    I'm creating my own game engine. I've read these articles and this question about DOD (data-oriented design), and they recommend not using member functions and classes. I've also heard some criticism of this idea. I could write it using member functions or non-member functions; the code would be similar. So what are the benefits and drawbacks of each approach, and as the project grows, does either approach give clearer and more manageable code? With POD (plain old data) and non-member functions, I don't have to make struct members public, and I can still use an object ID outside of the engine, like OpenGL does with all its stuff, so it's not about encapsulation.

    Read the article

  • How to enable SSIS as a data source type on SQL Server Reporting Services (SSRS) 2008 R2

    When you create a data source in SSRS 2008 R2 (Nov CTP), you won't see SSIS listed as a data source type. Therefore, applications that are already using it as a data source, or that require it as a data source, get stuck. Let's learn how to enable SSIS and get it listed again as a data source in SSRS 2008 R2.

    Read the article

  • Programming language features that help to catch bugs early

    - by Christian Neumanns
    Do you know any programming language features that help to detect bugs early in the software development process - ideally at compile time, or else as early as possible at run time? Examples of well-known and effective bug-reducing features are:

    - Static typing and generic types: type incompatibility errors are detected by the compiler
    - Design by Contract (TM), also called contract programming: invalid values are quickly detected at runtime (through preconditions, postconditions and class invariants)
    - Unit testing

    I ask this question in the context of improving an object-oriented programming language (called Obix) which has been designed from the ground up to 'make it easy to quickly write reliable code'. Besides the features mentioned above, this language also incorporates other fail-fast features, such as:

    - Objects are immutable by default
    - Void (null) values are not allowed by default

    The aim is to add more fail-fast concepts to the language. If you know other features which help to write less error-prone code, then please let us know. Thank you.
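    To make the contract-programming and immutability points concrete, here is a small hand-written Java illustration (mine, not from the question): invalid state is rejected the moment it is created rather than discovered later.

        import java.util.Objects;

        // An immutable value object with contract-style precondition checks.
        public final class Temperature {
            private final double kelvin;

            public Temperature(double kelvin) {
                // Precondition: reject invalid state at construction time (fail fast)
                if (kelvin < 0) {
                    throw new IllegalArgumentException("kelvin must be >= 0, got " + kelvin);
                }
                this.kelvin = kelvin;
            }

            public Temperature plus(Temperature other) {
                // Precondition: explicit null check instead of a later NullPointerException
                Objects.requireNonNull(other, "other must not be null");
                return new Temperature(this.kelvin + other.kelvin); // returns a new instance
            }

            public double kelvin() {
                return kelvin;
            }
        }

    Static typing already rules out passing, say, a String where a Temperature is expected, and the constructor check acts like a class invariant enforced at every creation point.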

    Read the article

  • Forbes Article on Big Data and Java Embedded Technology

    - by hinkmond
    Whoa, cool! Forbes magazine has an online article about what I've been blogging about all this time: Big Data and Java Embedded Technology, tying it all together with a big bow, connecting small devices to the data center. See: Billions of Java Embedded Devices Here's a quote: By the end of the decade we could see tens of billions of new Internet-connected devices... with billions of Internet- connected devices generating Big Data, are the next big thing. ... That’s why Oracle has put together an ecosystem of solutions for this new, Big Data-oriented device-to-data center world: secure, powerful, and adaptable embedded Java for intelligent devices, integrated middleware... This is the next big thing. Java SE Embedded Technology is something to watch for in the new year. Start developing for it now to get a head-start... Hinkmond

    Read the article

  • Google I/O 2010 - Data migration in App Engine

    Google I/O 2010 - Data migration in App Engine (App Engine 201) - Matthew Blain. Learn about the App Engine bulk loader and see an example of migrating data from an external data source into the App Engine datastore - and back out. Do you have data stored in a traditional, relational DB which you'd like to upload to App Engine? This session will teach you how. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 44:26.

    Read the article
