Search Results

Search found 69357 results on 2775 pages for 'data oriented design'.

Page 214/2775 | < Previous Page | 210 211 212 213 214 215 216 217 218 219 220 221  | Next Page >

  • Webcast: AutoInvoice Overview & Data Flow

    - by Annemarie Provisero-Oracle
    Webcast: AutoInvoice Overview & Data Flow Date: June 4, 2014 at 11:00 am ET, 9:00 am MT, 4:00 pm GMT, 8:30 pm IST This one-hour session is part one of a three-part series on AutoInvoice and is recommended for technical and functional users who would like a better understanding of what AutoInvoice does, the required setups, and how data flows through the process. We will also cover diagnostic scripts used with AutoInvoice. Topics will include: Why Use AutoInvoice?; AutoInvoice Setups; Data Flow; and Diagnostic Tools. Details & Registration: Doc ID 1671931.1

    Read the article

  • BigQuery: Simple example of a data collection and analysis pipeline + Your questions

    BigQuery: Simple example of a data collection and analysis pipeline + Your questions Join Michael Manoochehri and Ryan Boyd live to talk about Google BigQuery. We'll give an overview of how we're using our cars, phones, App Engine and BigQuery to collect and analyze data. We'll be discussing our trusted tester feature, which allows analyzing data from the App Engine datastore. We'll also review some of the more interesting questions from Stack Overflow and take questions via Google Moderator. From: GoogleDevelopers | Views: 250 | 16 ratings | Time: 26:53 | More in Science & Technology

    Read the article

  • Moving dozens of existing standalone retail sites to one central inventory database: what should I know going in?

    - by palintropos
    This will be the first project of this scale that I have attempted, and the first time I have run a website at all (much less dozens) using an off-site database. In particular, I'd like to know: what sort of optimizations should I read up on to make this run as smoothly as possible; what pitfalls and gotchas wiser, more experienced folk are aware of that I should be on the lookout for; and what damage-control and preventative measures I should take against the nightmare scenario of the main server (hosting the database) having an outage and grinding over 100 websites to a halt (because they would have no access to the product data).

    Read the article

  • Silverlight 4 + RIA Services - Ready for Business: Exposing Data from Entity Framework

    To continue our series I wanted to look next at how to expose your data from the server side of your application. The interesting data in your business applications comes from a wide variety of data sources: a SQL Server database, Oracle DB, SQL Azure, SharePoint, or a mainframe, and you have likely already chosen a data access model such as NHibernate, Linq2Sql, Entity Framework, stored procedures, or a service. The goal of RIA Services in this release is to make it easy to...

    Read the article

  • Synthetic database records

    - by michipili
    Assume we are getting some statistics from a customer which we analyse, and we send our comments back to the customer. Now, the customer tells us that the statistics they computed between January and March are based on a wrong methodology and sends us corrected series. We want to perform analyses with both the wrong and the corrected sets of data, which are huge and only differ from January to March. Therefore, we need something like synthetic database records implementing the following logic: synthetic[1] = wrong_data; synthetic[2] = correct_data between January and March, wrong_data otherwise. With this, we can easily perform our analyses on synthetic records. Should such synthetic records be implemented in the application logic or on the side of the database? What are common pitfalls of such an implementation?
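
    As one illustration of the application-side option, here is a minimal Java sketch of the idea (SeriesOverlay, the Double values, and the January-March window are assumptions for illustration, not details from the question): corrected observations shadow the original ones only inside the override window, and everything else falls through to the original series.

        import java.time.LocalDate;
        import java.util.Map;

        // Hypothetical overlay: corrected values are used only inside the
        // override window; outside it the original ("wrong") series is kept.
        final class SeriesOverlay {
            private final Map<LocalDate, Double> original;   // the wrong series
            private final Map<LocalDate, Double> corrected;  // January-March corrections
            private final LocalDate from = LocalDate.of(2014, 1, 1);
            private final LocalDate to   = LocalDate.of(2014, 3, 31);

            SeriesOverlay(Map<LocalDate, Double> original, Map<LocalDate, Double> corrected) {
                this.original = original;
                this.corrected = corrected;
            }

            double valueAt(LocalDate day) {
                boolean inWindow = !day.isBefore(from) && !day.isAfter(to);
                if (inWindow && corrected.containsKey(day)) {
                    return corrected.get(day);
                }
                return original.get(day);
            }
        }

    The same overlay can be expressed on the database side as a view that left-joins the correction table onto the original table and prefers the corrected column where one is present.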

    Read the article

  • Master Data Management - The Trend Towards Multi-Domain and Other Realities

    - by Mala Narasimharajan
    In my quest to keep my fingers on the pulse of MDM, I recently found a pretty interesting article. The article was published in Information Week and provides some interesting statistics from a recent survey conducted by the analyst firm The Information Difference. Let's take a look: of the 130 organizations surveyed, 53% have live operational MDM implementations; 81% of those with live operational MDM implementations report broad success, a huge improvement over 2011's 54%; and 64% developed a business case prior to their MDM deployment, while a daring 32% went ahead without one. The article goes on to talk about the shift among vendors from a focus on customer data and product information management towards multi-domain master data management, as well as other realities around MDM. Take a look at the article. For more information on Oracle's master data management suite, click here.

    Read the article

  • How should I implement the repository pattern for complex object models?

    - by Eric Falsken
    Our data model has almost 200 classes that can be separated out into about a dozen functional areas. It would have been nice to use domains, but the separation isn't that clean and we can't change it. We're redesigning our DAL to use Entity Framework, and most of the recommendations that I've seen suggest using a Repository pattern. However, none of the samples really deal with complex object models. Some implementations that I've found suggest the use of a repository per entity. This seems ridiculous and unmaintainable for large, complex models. Is it really necessary to create a UnitOfWork for each operation, and a Repository for each entity? I could end up with thousands of classes. I know this is unreasonable, but I've found very little guidance on implementing Repository, Unit of Work, and Entity Framework over complex models and realistic business applications.
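
    One common answer to the class-count explosion is a single generic repository per aggregate root rather than one per entity. The sketch below is in Java rather than C#/Entity Framework, and the names (Repository, UnitOfWork, repositoryFor, the Order example) are illustrative assumptions, not an established API; it only shows the shape of the abstraction.

        import java.util.List;
        import java.util.Optional;

        // One generic contract instead of hundreds of per-entity repositories.
        interface Repository<T, ID> {
            Optional<T> findById(ID id);
            List<T> findAll();
            void add(T entity);
            void remove(T entity);
        }

        // A unit of work groups the repositories used by one functional area
        // and commits their changes in a single transaction.
        interface UnitOfWork extends AutoCloseable {
            <T, ID> Repository<T, ID> repositoryFor(Class<T> aggregateRoot);
            void commit();
            @Override void close();
        }

        // Usage sketch: only aggregate roots get a repository; child entities
        // are reached through their root, which keeps the class count down.
        // try (UnitOfWork uow = sessionFactory.openUnitOfWork()) {   // hypothetical factory
        //     Repository<Order, Long> orders = uow.repositoryFor(Order.class);
        //     orders.add(new Order(...));
        //     uow.commit();
        // }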

    Read the article

  • Failure to download extra data files

    - by armanke13
    After a fresh install of 12.04, updating apt, and a system restart, I always get this annoying message after reboot:

        Failure to download extra data files
        The following packages requested additional data downloads after package
        installation, but the data could not be downloaded or could not be processed.

            ttf-mscorefonts-installer

        The download will be attempted again later, or you can try the download
        again now. Running this command requires an active Internet connection.

    But if I choose to attempt the download now, a terminal window flashes and nothing seems to happen. The same thing happens again when I restart the system. I found someone who has this problem too, but their question hasn't been answered yet. I'm a newbie here, please help, thanks ^^

    Read the article

  • New eBook: In-Memory Data Grids for Dummies

    - by jeckels
    We've just released a new eBook In-Memory Data Grids for Dummies. This is a fantastic resource if you're looking to explain in-memory data grids to colleagues, convince your boss of their value, or even discover some new use cases for your existing investment. In true "Dummies" style, this eBook will walk you through the basic tenets of in-memory data grids, their common use cases, where IMDGs sit in your architecture, and some key considerations when looking to implement them. While the title may say "Dummies," we know you'll find some useful overview and technical information in the resource. It's published by us on the Coherence team in partnership with Wiley (the "Dummies" company), but it's not only about Coherence or Oracle. In fact, we took pains to make this book fairly neutral to give you the best information, not a product pitch. Happy reading! Download the eBook now

    Read the article

  • Testing complex compositions

    - by phlipsy
    I have a rather large collection of classes which check and mutate a given data structure. They can be composed via the composite pattern into arbitrarily complex tree-like structures. The final product contains a lot of these composed structures. My question is now: how can I test those? Although it is easy to test every single unit of these compositions, it is rather expensive to test whole compositions, in the following sense: testing the correct layout of the composition tree results in a huge number of test cases, and changes to the compositions result in a very laborious review of every single test case. What is the general guideline here?
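
    For context, here is a minimal Java sketch of the kind of composite being described (the Check interface, CompositeCheck, and Data are assumptions, not names from the question): leaf checks inspect or mutate the data structure, and a composite node simply delegates to its children, which is why the layout of the tree itself becomes something the tests have to pin down.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical component interface: every node can check (and
        // possibly mutate) the shared data structure.
        interface Check {
            void apply(Data data);
        }

        // Composite node: applies its children in order. The behaviour of the
        // whole tree depends on which children are present and in what order.
        final class CompositeCheck implements Check {
            private final List<Check> children = new ArrayList<>();

            CompositeCheck add(Check child) {
                children.add(child);
                return this;
            }

            @Override
            public void apply(Data data) {
                for (Check child : children) {
                    child.apply(data);
                }
            }
        }

        // Placeholder for the structure being checked and mutated.
        final class Data { /* fields omitted */ }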

    Read the article

  • Importing Data From Excel Using SSIS - Part 1

    Recently, while working on a project to import data from an Excel worksheet using SSIS, I realized that sometimes the SSIS package failed even though there were no changes in the structure/schema of the Excel worksheet. I investigated and noticed that the SSIS package succeeded for some sets of files but failed for others. I found that the structure/schema of the worksheets from both sets of Excel files was the same; the data was the only difference. How can just changing the data make an SSIS package fail? What actually causes this failure? What can we do to fix it?

    Read the article

  • I need to get past my permissions to recover data

    - by adsmz
    Due to some mishaps, I am unable to boot into Kubuntu at all. However, my data is still on the hard drive. I managed to get one of the other two computers to which I have access to read the disk by booting into a Kubuntu live CD session. The only storage medium I have available is a 30 GB USB stick. Here's where the trouble starts: in music alone, I have about 60 GB to back up. Obviously this is going to have to be split into chunks and moved over to the second spare PC until I can reinstall Kubuntu on my laptop. All of the data that needs to be backed up is behind a permissions wall, so while I can view it, I can't interact with it directly. I know copying and moving through the terminal can get around this with sudo cp or sudo mv, but is there a way to first compress multiple folders into a single archive and then move it? (While we're on the subject, what compression method would be best for large volumes of music in MP3, WAV, and OGG format?)

    Read the article

  • handling long running large transactions with perl dbi

    - by 1stdayonthejob
    I've got a large transaction comprising getting lots of data from database A, doing some manipulation of this data, then inserting the manipulated data into database B. I've only got permission to select in database A, but I can create tables and insert/update etc. in database B. The manipulation and insertion part is written in Perl and is already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the Perl classes.

    How can I go about doing this so I can easily track back and pick up from where the error happened if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure etc.)? Doing the transaction in one go doesn't seem like a good option because the amount of data from database A means it would take at least a day or two for data manipulation and insertion into database B.

    The data from database A can be grouped into around 1000 groups using unique keys, with each key containing thousands of rows. One way I thought I could do this is to write a script that commits per group, meaning I've got to track which groups have already been inserted into database B. The only ways I can think of to track the progress of which groups have been processed are either a log file or a table in database B. A second way I thought could work is to dump all the fields needed for loading the classes into a flat file, read the file to initialize the classes, and insert into database B. This also means I have to do some logging, but it should narrow a failure down to the exact row in the flat file. The script will look something like this:

        use strict;
        use warnings;
        use DBI;

        # connect to database A
        my $dbh = DBI->connect('dbi:oracle:my_db', $user, $password,
                               { RaiseError => 1, AutoCommit => 0 });

        # statement to get data based on the group unique key
        my $sth = $dbh->prepare($my_sql);

        my @groups;  # I have a list of this already

        open my $fh, '>>', 'my_logfile' or die "can't open logfile $!";

        eval {
            foreach my $g (@groups) {
                # subroutine to check if the group has already been processed,
                # either from the log file or from a database table
                next if is_processed($g);

                $sth->execute($g);
                my $data = $sth->fetchall_arrayref;

                # manipulate $data, then use it to load Perl classes for
                # insertion into database B
                # ...

                print $fh "$g\n";
            }
        };
        if ($@) {
            $dbh->rollback;
            die "something wrong...rollback";
        }

    So if any errors do occur, I can just run this script again and it should skip the groups or rows that have already been processed and continue. Both these methods are just variations on the same theme, and both require going back to where I've been tracking my progress (in a table or file), skipping the ones that have been committed to database B, and processing the remaining data. I'm sure there's a better way of doing this but am struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out of one and inserting into the other? The process doesn't need to be all in Perl, as long as I can reuse the Perl classes for manipulating and inserting the data into the database.

    Read the article

  • AJAX Return Problem from data sent via jQuery.ajax

    - by Anthony Garand
    I am trying to receive a JSON object back from PHP after sending data to the PHP file from the JS file. All I get is undefined. Here are the contents of the PHP and JS files.

    data.php:

        <?php
        $action = $_GET['user'];
        $data = array(
            "first_name" = "Anthony",
            "last_name" = "Garand",
            "email" = "[email protected]",
            "password" = "changeme");
        switch ($action) {
            case '[email protected]':
                echo $_GET['callback'] . '('. json_encode($data) . ');';
                break;
        }
        ?

    core.js:

        $(document).ready(function(){
            $.ajax({
                url: "data.php",
                data: {"user":"[email protected]"},
                context: document.body,
                data: "jsonp",
                success: function(data){ renderData(data); }
            });
        });

        function renderData(data) {
            document.write(data.first_name);
        }

    Read the article

  • Entity Framework 4 POCO entities in separate assembly, Dynamic Data Website?

    - by steve.macdonald
    Basically I want to use a Dynamic Data website to maintain data in an EF4 model where the entities are in their own assembly, while the model and context are in another assembly. I tried this http://stackoverflow.com/questions/2282916/entity-framework-4-self-tracking-entities-asp-net-dynamic-data-error but get an "ambiguous match" error from reflection:

        System.Reflection.AmbiguousMatchException was unhandled by user code
          Message=Ambiguous match found.
          Source=mscorlib
          StackTrace:
            at System.RuntimeType.GetPropertyImpl(String name, BindingFlags bindingAttr, Binder binder, Type returnType, Type[] types, ParameterModifier[] modifiers)
            at System.Type.GetProperty(String name)
            at System.Web.DynamicData.ModelProviders.EFTableProvider..ctor(EFDataModelProvider dataModel, EntitySet entitySet, EntityType entityType, Type entityClrType, Type parentEntityClrType, Type rootEntityClrType, String name)
            at System.Web.DynamicData.ModelProviders.EFDataModelProvider.CreateTableProvider(EntitySet entitySet, EntityType entityType)
            at System.Web.DynamicData.ModelProviders.EFDataModelProvider..ctor(Object contextInstance, Func`1 contextFactory)
            at System.Web.DynamicData.ModelProviders.SchemaCreator.CreateDataModel(Object contextInstance, Func`1 contextFactory)
            at System.Web.DynamicData.MetaModel.RegisterContext(Func`1 contextFactory, ContextConfiguration configuration)
            at WebApplication1.Global.RegisterRoutes(RouteCollection routes) in C:\dev\Puffin\Puffin.Prototype.Web\Global.asax.cs:line 42
            at WebApplication1.Global.Application_Start(Object sender, EventArgs e) in C:\dev\Puffin\Puffin.Prototype.Web\Global.asax.cs:line 78
          InnerException:

    Read the article

  • Properties in partial class not appearing in Data Sources window!

    - by Tim Murphy
    Entity Framework has created the required partial classes. I can add these partial classes to the Data Sources window and the properties display as expected. However, if I extend any of the classes in a separate source file, these properties do not appear in the Data Sources window even after a build and refresh. Properties in partial classes spread across source files work as expected in the Data Sources window, except when the partial class has been created with EF. EDIT: After removing the offending table from the EDM designer and adding it back in, it all works as expected. Hardly a long-term solution. Has anyone else come across a similar problem?

    Read the article

  • Data validation: fail fast, fail early vs. complete validation

    - by Vivin Paliath
    Regarding data validation, I've heard that the options are to "fail fast, fail early" or to do "complete validation". The first approach fails on the very first validation error, whereas the second one builds up a list of failures and presents it. I'm wondering about this in the context of both server-side and client-side data validation. Which method is appropriate in which context, and why? My personal preference for data validation on the client side is the second method, which informs the user of all failing constraints. I'm not informed enough to have an opinion about the server side, although I would imagine it depends on the business logic involved.
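
    To make the two styles concrete, here is a small Java sketch (the SignupForm class and its two rules are made-up examples, not from the question): the fail-fast variant stops at the first broken constraint, while the complete-validation variant accumulates every broken constraint and reports them together.

        import java.util.ArrayList;
        import java.util.List;

        final class SignupForm {
            final String email;
            final String password;
            SignupForm(String email, String password) { this.email = email; this.password = password; }
        }

        final class Validator {
            // Fail fast, fail early: stop at the first broken constraint.
            static void validateFailFast(SignupForm f) {
                if (f.email == null || !f.email.contains("@"))
                    throw new IllegalArgumentException("email is invalid");
                if (f.password == null || f.password.length() < 8)
                    throw new IllegalArgumentException("password is too short");
            }

            // Complete validation: collect every failure and present the list.
            static List<String> validateAll(SignupForm f) {
                List<String> errors = new ArrayList<>();
                if (f.email == null || !f.email.contains("@")) errors.add("email is invalid");
                if (f.password == null || f.password.length() < 8) errors.add("password is too short");
                return errors;  // an empty list means the input is valid
            }
        }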

    Read the article

  • Building big, immutable objects without constructors having long parameter lists

    - by Malax
    Hi StackOverflow! I have some big (more than 3 fields) objects which can and should be immutable. Every time I run into that case I tend to create constructor abominations with long parameter lists. It doesn't feel right, is hard to use, and readability suffers. It is even worse if the fields are some sort of collection type like lists. A simple addSibling(S s) would ease the object creation so much but renders the object mutable. What do you guys use in such cases? I'm on Scala and Java, but I think the problem is language-agnostic as long as the language is object oriented. Solutions I can think of: "constructor abominations with long parameter lists" or the Builder pattern. Thanks for your input!
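
    For reference, a minimal sketch of the Builder option in Java (the Person class and its fields are assumptions chosen for illustration): the builder is the only mutable object, addSibling-style calls live on it, and build() hands back a fully immutable instance with a defensively copied list.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        // Immutable target: all fields final, the collection defensively copied.
        final class Person {
            final String firstName;
            final String lastName;
            final List<Person> siblings;

            private Person(Builder b) {
                this.firstName = b.firstName;
                this.lastName = b.lastName;
                this.siblings = Collections.unmodifiableList(new ArrayList<>(b.siblings));
            }

            // The builder carries the long parameter list and the mutable
            // state, so the constructor of Person never has to.
            static final class Builder {
                private String firstName;
                private String lastName;
                private final List<Person> siblings = new ArrayList<>();

                Builder firstName(String v) { this.firstName = v; return this; }
                Builder lastName(String v)  { this.lastName = v; return this; }
                Builder addSibling(Person s) { this.siblings.add(s); return this; }

                Person build() { return new Person(this); }
            }
        }

        // Usage: new Person.Builder().firstName("Ada").lastName("Lovelace").build();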

    Read the article

  • How do functional programming languages work?

    - by eSKay
    I was just reading this excellent post, and got a better understanding of what exactly object-oriented programming is, how Java implements it in one extreme manner, and how functional programming languages are a contrast. What I was thinking is this: if functional programming languages cannot save any state, how do they do simple things like reading input from a user (I mean, how do they "store" it), or store any data for that matter? For example, how would this simple C program translate to a functional programming language such as Haskell?

        #include <stdio.h>

        int main() {
            int no;
            scanf("%d", &no);
            return 0;
        }

    Read the article

  • Is it Possible to Query Multiple Databases with WCF Data Services?

    - by Mas
    I have data being inserted into multiple databases with the same schema. The multiple databases exist for performance reasons. I need to create a WCF service that a client can use to query the databases. However, from the client's point of view, there is only one database. By this I mean that when a client performs a query, it should query all databases and return the combined results. I also need to provide the flexibility for the client to define its own queries. Therefore I am looking into WCF Data Services, which provides very nice functionality for client-specified queries. So far, it seems that a DataService can only make a query to a single database. I found no override that would allow me to dispatch queries to multiple databases. Does anyone know if it is possible for a WCF Data Service to query against multiple databases with the same schema?
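
    Setting the WCF Data Services specifics aside, the behaviour being asked for is a fan-out: run the same query against every database that shares the schema and concatenate the results so the client sees one logical database. A technology-neutral Java sketch of that idea (QueryableSource and FanOutQuery are assumptions, not part of any WCF API):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.Predicate;

        // Hypothetical handle on one physical database with the shared schema.
        interface QueryableSource<T> {
            List<T> run(Predicate<T> query);  // stand-in for a real query API
        }

        final class FanOutQuery {
            // Send the same query to every shard and merge the results.
            static <T> List<T> queryAll(List<QueryableSource<T>> shards, Predicate<T> query) {
                List<T> combined = new ArrayList<>();
                for (QueryableSource<T> shard : shards) {
                    combined.addAll(shard.run(query));
                }
                return combined;  // sort or de-duplicate here if the client expects it
            }
        }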

    Read the article

  • Why Use PHP OOP over Basic Functions and When?

    - by Codex73
    There are some posts about this matter, but I didn't clearly understand when to use object-oriented coding and when to use plain functions in an include. Somebody also mentioned to me that OOP is very heavy to run and creates more workload. Is this right? Let's say I have a big file with 50 functions: why would I want to call these through a class rather than by function_name()? Should I switch and create an object which holds all of my functions? What would be the advantage or specific difference? What benefits does writing OOP code bring in PHP? Modularity?

    Read the article

  • Is it a good idea to cache data from web services into a database?

    - by Thierry Lam
    Let's assume that Stack Overflow offers web services where you can retrieve all the questions asked by a specific user. A request to get all questions from user A can result in the following JSON output:

        [
          {
            "question": "What is rest?",
            "date_created": "20/02/2010",
            "votes": 1
          },
          {
            "question": "Which database to use for ...",
            "date_created": "20/07/2009",
            "votes": 5
          }
        ]

    If I want to manipulate and present the data in any way that I want, would it be wise to dump it in a local database? At some point, I will also want to retrieve all answers for each question and store them in a local database. The workflow that I'm thinking of is: the user logs in; web services retrieve all questions asked by the logged-in user and dump them in a local database; when the user wants all answers for a specific question, another web service does the retrieval and dumps them in the local database; after the user logs out, delete all of that user's questions and answers from the local database.
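
    A minimal sketch of that cache-on-login, evict-on-logout flow in Java (QuestionService, LocalQuestionCache, and the in-memory map are assumptions standing in for the real web service and local database):

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Stand-in for the remote web service.
        interface QuestionService {
            List<String> questionsFor(String userId);
        }

        // The "local database" is reduced to an in-memory map for the sketch;
        // a real implementation would write to a local SQL store instead.
        final class LocalQuestionCache {
            private final QuestionService service;
            private final Map<String, List<String>> questionsByUser = new HashMap<>();

            LocalQuestionCache(QuestionService service) { this.service = service; }

            // On login: pull everything for the user once and keep it locally.
            void onLogin(String userId) {
                questionsByUser.put(userId, service.questionsFor(userId));
            }

            // Reads are served from the local copy, not the remote service.
            List<String> questions(String userId) {
                return questionsByUser.get(userId);
            }

            // On logout: drop the user's cached data, as the workflow proposes.
            void onLogout(String userId) {
                questionsByUser.remove(userId);
            }
        }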

    Read the article

  • What are pointers to class members used for?

    - by srikfreak
    I have read about pointers to class members, but I have never seen them being used in any practical applications. Can someone explain what the use cases of such pointers are? Is it really necessary to have such pointers? E.g.:

        #include <iostream>
        using namespace std;

        class abc {
        public:
            int a;
            abc(int val) { a = val; }
        };

        int main() {
            int abc::*data;
            abc obj(5);
            data = &abc::a;
            cout << "Value of a is " << obj.*data << endl;
            return 0;
        }

    In the above example, why is the value of 'a' accessed in this manner? What is the advantage of using pointers to class members?

    Read the article

  • How to unit test internals (organization) of a data structure?

    - by Herms
    I've started working on a little Ruby project that will have sample implementations of a number of different data structures and algorithms. Right now it's just for me to refresh on stuff I haven't done for a while, but I'm hoping to have it set up kind of like Ruby Koans, with a bunch of unit tests written for the data structures but the implementations left empty (with full implementations in another branch). It could then be used as a nice learning tool or code kata. However, I'm having trouble coming up with a good way to write the tests. I can't just test the public behavior, as that won't necessarily tell me about the implementation, and that's kind of important here. For example, the public interfaces of a normal BST and a red-black tree would be the same, but the red-black tree has very specific data organization requirements. How would I test that?
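
    One common approach, sketched here in Java rather than Ruby (the RedBlackTree API shown is an assumption, not from the question): have the structure expose an invariant-checking hook that only the tests call, and assert the red-black properties after every mutation instead of enumerating expected tree layouts. In Ruby the equivalent would be a helper such as a valid? method, or a test that inspects the internal nodes directly.

        // Hypothetical tree with a test-visible invariant check.
        final class RedBlackTree {
            // ... insert/delete omitted ...

            // Package-private on purpose: only the tests call it.
            boolean satisfiesInvariants() {
                return rootIsBlack() && noRedNodeHasRedChild() && equalBlackHeightOnAllPaths();
            }

            private boolean rootIsBlack() { return true; }                // placeholder
            private boolean noRedNodeHasRedChild() { return true; }       // placeholder
            private boolean equalBlackHeightOnAllPaths() { return true; } // placeholder
        }

        final class RedBlackTreeTest {
            // Property-style test: rather than asserting one exact layout,
            // assert the structural invariants after every operation.
            static void insertionKeepsInvariants() {
                RedBlackTree tree = new RedBlackTree();
                for (int i = 0; i < 100; i++) {
                    // tree.insert(i);  // assumed mutation API
                    assert tree.satisfiesInvariants() : "invariants violated after insert " + i;
                }
            }
        }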

    Read the article
