Search Results

Search found 59643 results on 2386 pages for 'data migration'.


  • Algorithm to measure how "diffused" 5,000 pennies are in an economy?

    - by makerofthings7
    Please allow me to use this example/metaphor to describe an algorithm I need.

    Objects: There are 5,000 pennies and 50 cups. A tracking history (a passport "stamp", etc.) is associated with each penny as it moves between cups.

    Definition: I'll define a "highly diffused" penny as one that passes through many cups, and a "poorly diffused" penny as one that merely passes back and forth between two cups.

    Question: How can I objectively measure the diffusion of a penny in terms of: the number of moves the penny has gone through; the number of distinct cups the penny has been in; and a unit of time (day, week, month)? (A sketch of these measures appears below.)

    Why am I doing this? I want to detect whether a cup is hoarding pennies.

    Resistance from bad actors: Since hoarding is bad, a "bad cup" may simply solicit a partner and move pennies back and forth with it. This reduces the amount of time a coin isn't in transit and would skew hoarding detection. A solution might be to detect whether a cup (or set of cups) are common "partners" with each other, though I'm not sure how to think through this problem.

    Broad applicability: Any assistance would be helpful, since I would think this algorithm is common to economics, the study of migration patterns (of animals, or of citizens of a country), and other naturally occurring phenomena, and it probably exists as a term or concept I'm unfamiliar with.
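
    A minimal sketch of the three proposed measures, assuming each penny's history is just a chronological list of (timestamp, cup) stamps; all names here are hypothetical:

        from datetime import datetime, timedelta

        def diffusion_metrics(history, window=timedelta(days=30)):
            """history: chronological list of (timestamp, cup_id) stamps for one penny.
            Returns the three proposed measures: total moves, distinct cups visited,
            and stamps recorded within the recent time window."""
            now = datetime.utcnow()
            moves = max(len(history) - 1, 0)                  # cup-to-cup moves
            distinct_cups = len({cup for _, cup in history})  # unique cups visited
            recent = [t for t, _ in history if now - t <= window]
            return moves, distinct_cups, len(recent)

        def pingpong_score(history):
            """Fraction of moves that return the penny to the cup it occupied two
            stamps earlier -- a crude detector for two-cup "partner" laundering."""
            cups = [cup for _, cup in history]
            back_and_forth = sum(1 for i in range(2, len(cups)) if cups[i] == cups[i - 2])
            return back_and_forth / max(len(cups) - 2, 1)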

    Read the article

  • How to understand computer science as a whole?

    - by MrCellophane
    I am a college student in Computer Science, and I have been studying CS for a long time. But even today, I still feel confused about a lot of things. First of all, I have a solid foundation in algorithms, data structures, and OOP, but I don't have a clear understanding of the subject as a whole system. I have studied databases, algorithms, data structures, OOP, networking, software engineering, OS, and a lot of other courses. I know what they are, but I don't know how to connect them together. Is there a way to get a clear understanding of the architecture of the subject? And is there a way to know which technology is used to do what? For example, in interviews, when people ask me about algorithms, data structures, Java, OS, or any other specific field, I can answer, but when they ask something very general about the field, I have no idea. I know my question may be a little confusing, but my situation is that I don't even know how to ask a clear question; it's a total mess in my head. Is there a way to make it clear?

    Read the article

  • Problem displaying a texture image on a view: works fine on the iPhone simulator but not on the device

    - by yunas
    Hello, I am trying to display an image on the iPhone by converting it into a texture and then displaying it on a UIView. Here is the code to load an image from a UIImage object:

        - (void)loadImage:(UIImage *)image mipmap:(BOOL)mipmap texture:(uint32_t)texture
        {
            int width, height;
            CGImageRef cgImage;
            GLubyte *data;
            CGContextRef cgContext;
            CGColorSpaceRef colorSpace;
            GLenum err;

            if (image == nil) {
                NSLog(@"Failed to load");
                return;
            }

            cgImage = [image CGImage];
            width = CGImageGetWidth(cgImage);
            height = CGImageGetHeight(cgImage);
            colorSpace = CGColorSpaceCreateDeviceRGB();

            // Malloc may be used instead of calloc if your cg image has dimensions
            // equal to the dimensions of the cg bitmap context
            data = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
            cgContext = CGBitmapContextCreate(data, width, height, 8, width * 4,
                                              colorSpace, kCGImageAlphaPremultipliedLast);
            if (cgContext != NULL) {
                // Set the blend mode to copy. We don't care about the previous contents.
                CGContextSetBlendMode(cgContext, kCGBlendModeCopy);
                CGContextDrawImage(cgContext, CGRectMake(0.0f, 0.0f, width, height), cgImage);

                glGenTextures(1, &(_textures[texture]));
                glBindTexture(GL_TEXTURE_2D, _textures[texture]);
                if (mipmap)
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
                else
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA,
                             GL_UNSIGNED_BYTE, data);
                if (mipmap)
                    glGenerateMipmapOES(GL_TEXTURE_2D);

                err = glGetError();
                if (err != GL_NO_ERROR)
                    NSLog(@"Error uploading texture. glError: 0x%04X", err);

                CGContextRelease(cgContext);
            }
            free(data);
            CGColorSpaceRelease(colorSpace);
        }

    The problem I am currently facing is that this code works perfectly and displays the image on the simulator, whereas on the device the debugger shows an error: Error uploading texture. glError: 0x0501. Any idea how to tackle this bug? Thanks in advance for your solutions.

    Read the article

  • [Oracle Identity Manager] 11g R2 Bundle Patch 09 is Available!

    - by mustafakaya
    Oracle Identity Manager Bundle Patch 09 is available now. You can download BP09 from here. Also, there is an important recommendation for BP08!

    List of bugs fixed with BP09:

    Bug 12699224: Trusted source reconciliation fails to create users with many reconciliation field mappings.
    Bug 14407437: Provisioning through bulk request inserts null records into child tables.
    Bug 14493217: Target resource reconciliation throws an ORA-06512 error when the Descriptive field is mapped to a field that does not have a reconciliation field mapping.
    Bug 16044671: User form customization fails if a UDF contains an invalid character.
    Bug 16545968: Modifying any attribute on a service account changes the account type to a primary account.
    Bug 16562633: Oracle Identity Manager throws javax.el.ELException while viewing a profile under direct reports.
    Bug 16662834: User is not reprovisioned after the user is deleted and created in the target with the same orclguid.
    Bug 16662905: If an LOV field is required on an Application Instance form, no validation is enforced on the LOV field although it is required.
    Bug 16701873: The Members tab of a role displays only enabled users and does not display disabled users.
    Bug 16862846: When a notification is being sent, the mail ID in the Reply To field is set to the recipient's mail ID instead of the sender's mail ID.
    Bug 16824062: When you use the API to fetch or delete child data from an account, the child data row value is null; therefore, child data is not returned.
    Bug 16912736: There is a performance issue when the provisioned application instance details are opened for a user.

    Read the article

  • SQLite bulk insert on iPhone not working

    - by App_beginner
    Hi. I have been struggling with this seemingly easy problem for 48 hours, and I am no closer to a solution, so I was hoping that someone might be able to help me. I am building an app that uses a combination of a local (SQLite) database and an online database (PHP/MySQL). The app is nearly finished, checked for leaks, and works like a charm. However, the very last part is the part I have struggled with. On launch, I want the app to check for changes to the online database, and if there are any, download and parse an XML file containing the changes. Everything works fine this far. But when I try to bulk insert my parsed data into my database, the app crashes with an NSInternalInconsistency error, due to the database returning SQLITE_MISUSE. I have done a lot of googling but am still unable to solve my problem, so I am putting the code here, hoping that someone can help me fix it. And I know that I should have used Core Data for this, but this is the very last part I am struggling with, and I am very reluctant to change my entire code now; Core Data will have to come in an update. Here is the error I receive:

        Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: 'Error while inserting data. 'library routine called out of sequence''

    Here is my code:

        -(void)UpdateDatabase:(const char *)_query NewValues:(NSMutableArray *)_odb dbn:(NSString *)_dbn dbp:(NSString *)_dbp
        {
            sqlite3 *database;
            NSMutableArray *NewValues = _odb;
            int i;
            const char *query = _query;
            sqlite3_stmt *addStmt;

            for (i = 1; i < [NewValues count]; i++) {
                if (sqlite3_prepare_v2(database, query, -1, &addStmt, NULL) == SQLITE_OK) {
                    sqlite3_bind_text(addStmt, 1, [[[NewValues objectAtIndex:i] name] UTF8String], -1, SQLITE_TRANSIENT);
                    sqlite3_bind_text(addStmt, 2, [[[NewValues objectAtIndex:i] city] UTF8String], -1, SQLITE_TRANSIENT);
                    sqlite3_bind_double(addStmt, 3, [[[NewValues objectAtIndex:i] lat] doubleValue]);
                    sqlite3_bind_int(addStmt, 4, [[[NewValues objectAtIndex:i] long] doubleValue]);
                    sqlite3_bind_int(addStmt, 5, [[[NewValues objectAtIndex:i] code] intValue]);
                }
                if (SQLITE_DONE != sqlite3_step(addStmt)) {
                    NSAssert1(0, @"Error while inserting data. '%s'", sqlite3_errmsg(database));
                }
                // Reset the add statement.
                sqlite3_reset(addStmt);
            }
        }
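
    For reference, the SQLite C API expects an open / prepare / bind / step / finalize sequence on an opened database handle; SQLITE_MISUSE ("library routine called out of sequence") indicates a call arrived outside that order. A minimal sketch of the same kind of bulk insert using Python's built-in sqlite3 module, with a hypothetical table and sample rows:

        import sqlite3

        # Hypothetical schema and rows; the point is the sequence: open the
        # handle first, then prepare/bind/step (handled by executemany),
        # commit, and close -- all against the opened connection.
        rows = [("Oslo Cafe", "Oslo", 59.91, 10.75, 1234),
                ("Bergen Bar", "Bergen", 60.39, 5.32, 5678)]

        conn = sqlite3.connect("places.db")   # the handle must be opened first
        conn.execute("CREATE TABLE IF NOT EXISTS places "
                     "(name TEXT, city TEXT, lat REAL, lon REAL, code INTEGER)")
        with conn:                            # wraps the inserts in one transaction
            conn.executemany("INSERT INTO places VALUES (?, ?, ?, ?, ?)", rows)
        conn.close()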

    Read the article

  • Need advice on framework design: how to make extending easy

    - by beginner_
    I'm creating a framework/library for a rather specific use case (data type). It uses diverse Spring components, including Spring Data. The library has a set of entity classes properly set up, with corresponding service and DAO layers. The main work, or main benefit, of the framework lies in the DAO and service layers. Developers using the framework should be able to extend my entity classes to add additional fields they require, so I made the DAO and service layers generic enough to be used by such extended entity classes.

    I now face an issue in the IO part of the framework. It must be able to import the according "special data type" into the database, and in this part I need to create a new entity instance, and hence need the actual class used. My current solution is to configure, in Spring, a bean of the actual class used. The problem with this is that an application using the framework can only use one implementation of the entity (my original one, or exactly one subclass, but not two different classes of the same hierarchy). I'm looking for suggestions/designs for solving this issue (one possible direction is sketched below). Any ideas?
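
    To make the constraint concrete, here is a sketch in Python (the framework itself is Java/Spring; all names are hypothetical) of one possible direction: instead of one globally configured entity bean, each importer carries its own entity factory, so two subclasses of the same hierarchy can coexist in one application:

        from typing import Callable, Dict

        class BaseEntity:
            def __init__(self) -> None:
                self.fields: Dict[str, object] = {}

        class Importer:
            """Knows how to parse the special data type; creates entities
            through an injected factory instead of a fixed, global class."""
            def __init__(self, factory: Callable[[], BaseEntity]) -> None:
                self._factory = factory

            def import_record(self, raw: Dict[str, object]) -> BaseEntity:
                entity = self._factory()   # concrete subclass chosen per importer
                entity.fields.update(raw)  # populate the common fields
                return entity

        # Two extended entity types used side by side in one application:
        class AuditedEntity(BaseEntity): pass
        class TaggedEntity(BaseEntity): pass

        audited_importer = Importer(AuditedEntity)
        tagged_importer = Importer(TaggedEntity)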

    Read the article

  • Oracle SOA Suite for healthcare integration Dashboard

    - by Nitesh Jain
    Oracle SOA Suite for healthcare integration came up with a new way of monitoring, where a user can configure a dashboard and follow dynamic runtime changes. Oracle SOA Suite for healthcare integration dashboards display information about the current health of the endpoints in a healthcare integration application. You can create and configure multiple dashboards as needed to monitor the status and volume metrics for the endpoints you have defined. The dashboards reflect changes that occur in the runtime repository, such as purging runtime instance data, new messages processed, and new error messages. You can display data for various time periods, and you can manually refresh the data in real time or set the dashboard to automatically refresh at set intervals.

    A dashboard shows the following information:

    Status: The current status of the endpoint, such as Running, Idle, Disabled, or Errors.
    Messages Sent: The number of messages sent by the endpoint in the specified time period.
    Messages Received: The number of messages received by the endpoint in the specified time period.
    Errors: The number of messages with errors for the endpoint in the given time period.
    Last Sent: The date and time the last message was sent from the endpoint.
    Last Received: The date and time the last message was received by the endpoint.
    Last Error: The date and time of the last error for the endpoint.

    It also shows a detailed view of a specific endpoint:

    The document type.
    The number of messages received per second.
    The total number of messages processed in the specified time period.
    The average size of each message.

    Read the article

  • Which Graphics/Geometry abstraction to choose?

    - by Robz
    I've been thinking about the design for a browser app on the HTML5 canvas that simulates a 2D robot zooming around, sensing the world around it. I decided to do this from scratch just for fun. I need shapes, like polygons, circles, and lines, in order to model the robot and the world it lives in. These shapes need to be drawn with different appearance attributes, like border/fill style/width/color. I also need geometry functions to detect intersections and containment, for the robot's sensors and so that the robot doesn't go inside stuff.

    One idea is to have two totally separate libraries: one to implement graphics (like drawShape(context, shape)) and one for geometry operations (like shapeIntersectsShape(shape1, shape2)). Or, in a more object-oriented approach, the shape objects themselves could implement methods to do their own graphics (shape.draw(context)) and geometry operations (shape1.intersects(shape2)). (Both options are sketched below.)

    Then there is the data itself: should the data to draw a shape and the data to do geometric operations on that shape be encapsulated within the same object, or be separate structures (where one would contain the other, or both be contained inside another structure)?

    How do existing applications that do graphics/geometry stuff deal with this? Is there one model that is best, or is each good for certain applications? Should the fact that I'm using JavaScript instead of a more classical language change how I approach the design?
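
    A compact sketch of both options in Python (the actual app would be JavaScript; the canvas-like context "ctx" and all names are hypothetical):

        import math
        from dataclasses import dataclass

        # Option 1: plain data plus two separate libraries.
        @dataclass
        class Circle:
            x: float
            y: float
            r: float
            fill: str = "black"   # appearance data riding along with geometry data

        def draw_circle(ctx, c):              # graphics library function
            ctx.arc(c.x, c.y, c.r)
            ctx.fill(c.fill)

        def circles_intersect(a, b):          # geometry library function
            return math.hypot(a.x - b.x, a.y - b.y) <= a.r + b.r

        # Option 2: object-oriented, the shape owns both concerns.
        class OOCircle(Circle):
            def draw(self, ctx):
                draw_circle(ctx, self)
            def intersects(self, other):
                return circles_intersect(self, other)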

    Read the article

  • ORM and component-based architecture

    - by EagleBeek
    I have joined an ongoing project where the team calls their architecture "component-based". The lowest level is one big database. The data access (via ORM) and business layers are combined into various components, which are separated according to business logic. E.g., there's a component for handling bank accounts, one for generating invoices, etc. The higher levels of service contracts and presentation are irrelevant to the question, so I'll omit them here.

    From my point of view, the separation of the data access layer into various components seems counterproductive, because it denies us the relational mapping capabilities of the ORM. E.g., when I want to query all invoices for one customer, I have to identify the customer with the "customers" component and then make another call to the "invoices" component to get the invoices for this customer (sketched below). My impression is that it would be better to leave the data access in one component and separate it from the business logic, which may well be cut into various components.

    Does anybody have some advice? Have I overlooked something?
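
    For illustration, the two-call pattern sketched in Python with hypothetical component interfaces (the project itself is not Python); the comment shows the single-data-access-layer alternative, where the ORM performs the join:

        # Component-per-domain style: the caller joins the data in application code.
        def invoices_for_customer(customers, invoices, customer_name):
            customer = customers.find_by_name(customer_name)   # call into "customers"
            return invoices.find_by_customer_id(customer.id)   # call into "invoices"

        # Single data-access-layer style: the ORM traverses the relation in one
        # query, along the lines of (SQLAlchemy-flavored pseudocode):
        #   session.query(Invoice).join(Customer).filter(Customer.name == name).all()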

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and think I understand the basic concepts of good schema design pretty well. I was recently tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating outside my realm? The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design this like so:

        User table:
            UserID (primary key, indexed, no dupes)
            FirstName
            LastName
            Department

        Request table:
            RequestID (primary key, indexed, no dupes)
            <...> various data fields containing request details
            UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

        UsersTable
            UserID  FirstName  LastName
            234     John       Doe
            516     Jane       Doe
            123     Foo        Bar

        DepartmentsTable
            DepartmentID  Name
            1             Sales
            2             HR
            3             IT

        UserDepartmentTable
            UserDepartmentID  UserID  Department
            1                 234     2
            2                 516     2
            3                 123     1

        RequestTable
            RequestID  UserID  <...>
            1          516     blah
            2          516     blah
            3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table, with numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this a valid design for a small to mid-sized SQL DB? Thanks for comments/answers...

    Read the article

  • What are the requirements to test a website using jquery.get()? [migrated]

    - by Frankie
    I am working on a simple website. It has to search quite a few text files in different sub-folders. The rest of the page uses jQuery, so I would like to use it for this also. The function I am looking at is .get(), for downloading the files. So my main question is: can I test this on my local computer (Ubuntu Linux), or do I have to have it uploaded to a server? Also, if there's a better way to go about this, that would be nice to know. However, I'm more worried about getting it working. Thanks, Frankie

    PS: Here's the JS/jQuery code for downloading the files to an array:

        g_lists = new Array();

        $(":checkbox").each(function(i){
            if ($(this).attr("name") != "0") {
                var path = "../" + $(this).attr("name") + ".txt";
                $("#bot").append("<br />" + path); // debug
                $.get(path, function(data){
                    g_lists[i] = data;
                    $("#bot").html(data);
                });
            } else {
                g_lists[i] = "";
            }
        });

    Edit: Just a note about the path variable. I think it's correct, but I'm not 100% sure; I'm new to web development. Here are some examples it produces, and the directory tree of the site. Maybe it will help; can't hurt.

        .
        |-- include
        |   |-- jquery.js
        |   `-- load.js
        |-- index.xhtml
        |-- style.css
        `-- txt
            `-- Scripting_Tools
                |-- Editors.txt
                `-- Other.txt

    Examples of path:

        ../txt/Scripting_Tools/Editors.txt
        ../txt/Scripting_Tools/Other.txt

    Well, I'm a new user, so I can't "answer" my own question, so I'll just post it here: after asking for help on an IRC chat channel specific to jQuery, I was told I could use this on a local host. To do this I installed the Apache web server and copied my site into its directory. More information on setting it up can be found here: http://www.howtoforge.com/ubuntu_debian_lamp_server. Then, to run the site, I navigated my browser to "localhost" and everything works.

    Read the article

  • Determine All SQL Server Table Sizes

    I'm doing some work to migrate and optimize a large-ish (40GB) SQL Server database at the moment. Moving such a database between data centers over the Internet is not without its challenges. In my case, virtually all of the size of the database is the result of one table, which has over 200M rows of data. To determine the size of this table on disk, you can run the sp_spaceused stored procedure, like so:

        EXEC sp_spaceused lq_ActivityLog

    This results in the following: [screenshot of sp_spaceused output]

    Of course, this only shows one table. If you have a lot of tables and need to know which ones are taking up the most space, it would be nice if you could run a query to list all of the tables, perhaps ordered by the space they're taking up. Thanks to Mitchel Sellers (and Gregg Stark's CURSOR template) and a tiny bit of my own edits, now you can! Create the stored procedure below and call it to see a listing of all user tables in your database, ordered by their reserved space.

        -- Lists space used for all user tables
        CREATE PROCEDURE GetAllTableSizes
        AS
        DECLARE @TableName VARCHAR(100)

        DECLARE tableCursor CURSOR FORWARD_ONLY
        FOR select [name]
            from dbo.sysobjects
            where OBJECTPROPERTY(id, N'IsUserTable') = 1
        FOR READ ONLY

        CREATE TABLE #TempTable
        (
            tableName varchar(100),
            numberofRows varchar(100),
            reservedSize varchar(50),
            dataSize varchar(50),
            indexSize varchar(50),
            unusedSize varchar(50)
        )

        OPEN tableCursor
        WHILE (1=1)
        BEGIN
            FETCH NEXT FROM tableCursor INTO @TableName
            IF (@@FETCH_STATUS <> 0) BREAK;
            INSERT #TempTable
                EXEC sp_spaceused @TableName
        END
        CLOSE tableCursor
        DEALLOCATE tableCursor

        UPDATE #TempTable
        SET reservedSize = REPLACE(reservedSize, ' KB', '')

        SELECT tableName 'Table Name',
               numberofRows 'Total Rows',
               reservedSize 'Reserved KB',
               dataSize 'Data Size',
               indexSize 'Index Size',
               unusedSize 'Unused Size'
        FROM #TempTable
        ORDER BY CONVERT(bigint, reservedSize) DESC

        DROP TABLE #TempTable
        GO

    Read the article

  • Another Exchange 2003 to Exchange 2010 mail flow issue

    - by Ryan Roussel
    During a migration recently, we came across another internal mail routing issue. The symptoms were identical to my previous post about Exchange internal mail routing: mail was flowing from 2010 to 2003 and from 2010 to the Internet, but not from 2003 to 2010.

    I went through the normal checklist, looking at permissions, DNS, and the routing group connectors. I verified that both servers listed in the routing group connectors were the routing master in their respective routing groups through the 2003 ESM. I also verified that inheritable permissions were enabled for the Exchange 2003 server object in the schema. No luck with either.

    For my previous post about this issue, in which inheritable permissions were the culprit: Exchange 2010, Exchange 2003 Mail Flow issue. And for routing group issues: Exchange 2007 Routing Group Connector Mayhem.

    I finally enabled logging on the SMTP virtual server on Exchange 2003 and the Default Receive Connector on 2010 and sent a few test e-mails, where I found 2003 was having issues authenticating to 2010. By default, 2003 uses Exchange Server Authentication to communicate with 2010. The exact error, found in the SMTP logs on the Exchange 2003 side, was:

        4.7.0 Temporary Authentication Failure

    After scouring based on this error, I found the solution: the "Access this computer from the network" user right in the local computer policy on the Exchange 2010 server had been changed from the default. The network administrator had modified the Default Domain policy and changed this user right assignment to list only Domain Users.

    The fix was to clear this setting in the Default Domain policy, force gpupdate to refresh the group policy settings, then ensure the appropriate users and groups were listed. This immediately fixed the problem, and the Exchange 2003 server was able to route mail to the Exchange 2010 mailboxes.

    The default user rights assignments for "Access this computer from the network":

    On workstations and servers:
        Administrators
        Backup Operators
        Power Users
        Users
        Everyone

    On domain controllers:
        Administrators
        Authenticated Users
        Everyone

    More can be found here: http://technet.microsoft.com/en-us/library/cc740196(WS.10).aspx

    Read the article

  • Should components have sub-components in a component-based system like Artemis?

    - by Daniel Ingraham
    I am designing a game using Artemis, although this is more of a philosophical question about component-based design in general. Let's say I have non-primitive data which applies to a given component (a component "animal" may have qualities such as "teeth" or "diet"). There are three ways to approach this in data-driven design, as I see it:

    1) Generate classes for these qualities using "traditional" OOP. I imagine this has negative implications for performance, as systems then must be made aware of these qualities in order to process them. It also seems counter to the overall philosophy of data-driven design.

    2) Include these qualities as sub-components. This seems off, in that we are now confusing the role of components with that of entities. Moreover, out of the box Artemis isn't capable of mapping these sub-components onto their parent components.

    3) Add "teeth", "diet", etc. as components to the overall entity, alongside "animal". While this feels odd hierarchically, it may simply be a peculiarity of component-based systems. (A sketch of this option appears below.)

    I suspect 3 is the correct way to think about things, but I was curious about other ideas.
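
    A minimal sketch of option 3 in Python (Artemis itself is Java; all names here are hypothetical), where "teeth" and "diet" become sibling components on the entity rather than children of the animal component:

        class Component: pass

        class Animal(Component): pass        # marker/behavioral component
        class Teeth(Component):
            def __init__(self, count): self.count = count
        class Diet(Component):
            def __init__(self, kind): self.kind = kind

        class Entity:
            def __init__(self):
                self._components = {}
            def add(self, c): self._components[type(c)] = c
            def get(self, cls): return self._components.get(cls)

        # Flat composition: systems can match on (Animal, Diet) directly,
        # without traversing a component hierarchy.
        wolf = Entity()
        wolf.add(Animal())
        wolf.add(Teeth(42))
        wolf.add(Diet("carnivore"))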

    Read the article

  • Ensure Payroll Success with PeopleSoft Year-End Training for U.S. and Canada

    - by Breanne Cooley
    Year-end payroll processing and reporting is a requirement for your business. If you're responsible for completing these processes in either Canada or the United States using the PeopleSoft Payroll application, and if you're new to PeopleSoft Payroll or to performing these processes, consider enrolling in Oracle University's expert training. Our PeopleSoft Payroll specialists will guide you through the necessary steps to ensure you can smoothly and successfully perform your job. Training is specific to the country for which you are performing the processing and reporting. Training lasts one day and is delivered in our Live Virtual Class format, which helps you avoid travel during this busy season. Here's the training we recommend:

    PeopleSoft Year-End Payroll - U.S.
    This course teaches you how to complete U.S. year-end processing and reporting using PeopleSoft Payroll for North America, step by step:
        Update tax reporting setup tables and update employees' income and tax records.
        Load each employee's year-end data into a single year-end record for processing and reporting.
        Identify reports needed to reconcile the year-end data.
        Correct tax balances and other data as necessary.
        Generate final print and online W-2 forms and prepare the electronic file for the Social Security Administration.
        Enter corrected W-2 information and print a W-2c form.
        Report periodic retirement distributions and related tax withholding amounts on form 1099-R.
    Please note: this course is intended for organizations using PeopleSoft release 8.81 or higher.

    PeopleSoft Year-End Payroll – Canada
    This course covers the steps necessary to perform Canadian year-end processing using Oracle's PeopleSoft Payroll for North America:
        Explore adjustments, balances, year-end slip processing, common pitfalls and errors, and balancing reports.
        Produce accurate year-end reporting results such as T4, T4A, RL-1, and RL-2.
    Please note: this course is intended for organizations using PeopleSoft release 8.81 or higher.

    See you in class!
    -Oracle University Marketing Team

    Read the article

  • HTTP handler for a classic ASP application to introduce a layer between client and server

    - by JPReddy
    I have a huge classic ASP application in which thousands of users manage their company/business data. Currently this is not multi-user, meaning application users cannot create users and authorize them to access certain areas in the system. I'm thinking of writing a handler that will act as a middleman between client and server, go through every request, and find out who the user is and whether he is authorized to access the data he is trying to reach. For the moment, ignore the part about how I'm going to check the authorization and all that stuff. I just want to know whether I can implement an ASP.NET handler and use it as a middleman for the requests coming to a classic ASP website. I just want to read the URL and see which page the user is trying to access, and which parameters he is passing in the URL and the posted data. Is this possible? I read that an ASP.NET handler cannot be used with a classic ASP website, and that I need to use an ISAPI filter or extension for that, which can be developed only in C/C++. Can anybody throw some light on this and guide me whether I'm in the right direction or not?

    Read the article

  • Parameterized Django models

    - by mgibsonbr
    In principle, a single Django application can be reused in two or more projects, providing functionality relevant to both. That implies that the same database structure (tables and relations) will be re-created identically in different databases, and most times this is not a problem (assuming the projects/databases are unrelated - for instance, when someone downloads a complete app to use in their own projects). Sometimes, however, the models must be "tweaked" a little to better fit the problem's needs. This can be accomplished by forking the app, but I wondered if there wouldn't be a better option in cases where the app designer can anticipate the most common customizations.

    For instance, if I have a model that could relate to another as one-to-one or one-to-many, I could specify the unique property as a parameter that can be set in the project's settings:

        class This(models.Model):
            other = models.ForeignKey(Other, unique=settings.OTHER_TO_THIS)

    Or if a model can relate to many others, I could create an intermediate table for each of them (thus enforcing referential integrity) instead of using generic FKs:

        for related in settings.MODELS_RELATED_TO_OTHER:
            model_name = '%s_Other' % related
            globals()[model_name] = type(model_name, (models.Model,), {
                'me': models.ForeignKey(find_model_class(related)),
                'other': models.ForeignKey(Other),
                # Some other properties all intersection tables must have
            })

    Etc. Let me stress that I'm not proposing to change the models at runtime or anything like that; once the parameters are defined and syncdb is called for the first time, those parameters are not to be changed again (unless you're doing a schema migration).

    Is this a good design? Are there better ways to accomplish the same thing, or maybe drawbacks I couldn't anticipate? This technique is meant to be used sparingly (only on apps meant to be reused in wildly different contexts, and only when a specific need for customization can be detected while the app model is being designed).

    Read the article

  • The MDM Journey: From the Customer Perspective

    - by Mala Narasimharajan
    Master Data Management is more than just a single version of the truth or a 360-degree view of the customer. It spans multiple domains, ranging from customers to suppliers to products and beyond. MDM is pivotal to providing a solid customer experience - one that results in repeat business, continued loyalty, and, last but not least, high customer satisfaction. Customer experience is not only defined as accurate information about the customer for the enterprise, but also presenting the customer with the right information about products, orders, product availability, etc. Let's take a look at a couple of customer use cases with Oracle MDM.

    [Picture from a recent customer panel]

    Oracle MDM is a key platform for increasing upsell/cross-sell opportunities, improving targeting of customers, uncovering new sales opportunities, and reducing inaccuracies in mailing marketing materials to prospects, as well as for tapping into and uncovering the full value of a customer across business units more accurately. A leading investment and private bank leverages Oracle MDM to do a better job of identifying clients and their levels of investment, as well as to consistently manage them through a series of areas such as credit, risk, new accounts, etc. Ultimately, they are looking to understand client investments and touchpoints across the company's offerings. Another use case for Oracle MDM is with a major financial and insurance services company with clients worldwide, looking to resolve customer data inaccuracies and client information stored differently across multiple systems. For more information on Oracle Master Data Management, click here.

    Read the article

  • What to do when TDD tests reveal new functionality that is needed that also needs tests?

    - by Joshua Harris
    What do you do when you are writing a test and you get to the point where you need to make the test pass, and you realize that you need an additional piece of functionality that should be separated into its own function? That new function needs to be tested as well, but the TDD cycle says to make a test fail, make it pass, then refactor. If I am on the step where I am trying to make my test pass, I'm not supposed to go off and start another failing test to test the new functionality that I need to implement.

    For example, I am writing a Point class that has a function CollidesWithLine(LineSegment):

        public class Point
        {
            // Point data and constructor ...

            public bool CollidesWithLine(LineSegment lineSegment)
            {
                Vector pointEndOfMovement = new Vector(Position.X + Velocity.X,
                                                       Position.Y + Velocity.Y);
                LineSegment pointPath = new LineSegment(Position, pointEndOfMovement);

                if (lineSegment.Intersects(pointPath))
                    return true;

                return false;
            }
        }

    I was writing a test for CollidesWithLine when I realized that I would need a LineSegment.Intersects(LineSegment) function. But should I just stop what I am doing in my test cycle to go create this new functionality? That seems to break the "Red, Green, Refactor" principle. Should I just write the code that detects that line segments intersect inside the CollidesWithLine function and refactor it after it is working? That would work in this case, since I can access the data from LineSegment, but what about cases where that kind of data is private?

    Read the article

  • How to perform game object smoothing in multiplayer games

    - by spaceOwl
    We're developing an infrastructure to support multiplayer games for our game engine. In simple terms, each client (player) engine sends some pieces of data about the relevant game objects at a given time interval. On the receiving end, we step the incoming data to the current time (to compensate for latency), followed by a smoothing step (which is the subject of this question). I was wondering how smoothing should be performed. Currently the algorithm is similar to this:

        1. Receive the incoming state for an object (position, velocity, acceleration,
           rotation, custom data like visual properties, etc.).
        2. Calculate a diff between the local object position and the position we have
           after the previous prediction steps.
        3. If the diff doesn't exceed some threshold value, start a smoothing step:
           - Mark the object's CURRENT POSITION and the TARGET POSITION.
           - Linearly interpolate between these values for 0.3 seconds.

    I wonder if this scheme is any good, or if there is any other common implementation or algorithm that should be used. (For example, should I only smooth out the position, or other values as well, such as speed?) A sketch of the interpolation step follows below. Any help will be appreciated.
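
    A minimal sketch of the described smoothing step in Python, assuming a per-frame update with a time delta; all names are hypothetical, and the position type only needs to support +, -, and scalar *:

        SMOOTH_TIME = 0.3  # seconds over which to blend toward the corrected state

        class Smoother:
            def __init__(self, current_pos):
                self.pos = current_pos
                self.start = current_pos
                self.target = current_pos
                self.elapsed = SMOOTH_TIME       # no correction in progress

            def correct(self, target_pos):
                """Begin blending from wherever we are now toward the server state."""
                self.start = self.pos
                self.target = target_pos
                self.elapsed = 0.0

            def update(self, dt):
                """Advance the interpolation by dt seconds; returns the display position."""
                self.elapsed = min(self.elapsed + dt, SMOOTH_TIME)
                t = self.elapsed / SMOOTH_TIME   # 0..1 blend factor
                self.pos = self.start + (self.target - self.start) * t
                return self.pos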

    Read the article

  • Best design for a "Command Executer" class

    - by Justin984
    Sorry for the vague title, I couldn't think of a way to condense the question. I am building an application that will run as a background service and intermittently collect data about the system it's running on. A second Android controller application will query the system over TCP/IP for statistics about it.

    Currently, the background service has a TCP listener class that reads/writes bytes from a socket. When data is received, it raises an event to notify the service. The service takes the bytes, feeds them into a command parser to figure out what is being requested, and then passes the parsed command to a command executer class. When the service receives a "query statistics" command, it should return statistics over the TCP/IP connection.

    Currently, all of these classes are fully decoupled from each other. But in order for the command executer to return statistics, it will obviously need access to the socket somehow. For reasons I can't completely articulate, it feels wrong for the command executer to have a direct reference to the socket. I'm looking for strategies and/or design patterns I can use to return data over the socket while keeping the classes decoupled, if this is possible (one idea is sketched below). Hopefully this makes sense; please let me know if I can include any info that would make the question easier to understand.
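
    One common way to keep the executer socket-free, sketched in Python with hypothetical names: the executer returns a response object, and only the listener/service layer ever touches the socket:

        from dataclasses import dataclass

        @dataclass
        class Response:
            payload: bytes          # what should go back over the wire

        class CommandExecuter:
            """Knows nothing about sockets; it only turns commands into Responses."""
            def execute(self, command: str) -> Response:
                if command == "query_statistics":
                    return Response(b'{"cpu": 0.42, "mem": 0.63}')  # sample stats
                return Response(b'{"error": "unknown command"}')

        class Service:
            """Wires listener and executer together; only the listener does I/O."""
            def __init__(self, executer: CommandExecuter, send_fn):
                self.executer = executer
                self.send = send_fn  # e.g. the TCP listener's write method

            def on_command(self, command: str):
                self.send(self.executer.execute(command).payload)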

    Read the article

  • ArchBeat Link-o-Rama for October 18, 2013

    - by OTN ArchBeat
    Enriching XMLType data using relational data – XQuery and fn:collection in action | Lucas Jellema
    Another detailed technical post from the always prolific Lucas Jellema.

    Evil Behind ChangeEventPolicy PPR in CRUD ADF 12c and WebLogic Stuck Threads | Andrejus Baranovskis
    The latest post from Oracle ACE Director Andrejus Baranovskis is a bit of a preview of his presentation at the upcoming UKOUG 2013 event.

    Podcast: Interview with authors of "Hudson Continuous Integration in Practice"
    For your listening pleasure... Here's an Oracle Author Podcast Interview with "Hudson Continuous Integration in Practice" authors Ed Burns and Winston Prakash.

    Manual Recovery Mechanisms in SOA Suite and AIA | Shreenidhi Raghuram
    Solution architect Shreenidhi Raghuram's post combines information from several sources to provide "a quick reference for Manual Recovery of Faults within the SOA and AIA contexts."

    Event: Harnessing Oracle Weblogic and Oracle Coherence
    This OTN Virtual Developer Day event features eight sessions in two tracks, with presentations and hands-on labs for developers and architects delivered by experts in Weblogic, Coherence, and ADF. Registration is free. November 5th, 2013. 9am-1pm PT / 12pm-4pm ET / 1pm-5pm BRT.

    Podcast: IoT Challenges and Opportunities - Part 2
    Part 2 of the OTN ArchBeat Internet of Things podcast features a roundtable discussion of IoT challenges: massive data streams, security and privacy issues, evolving standards and protocols. Listen!

    Video: Design - ADF Architectural Patterns - Two for One Deal | Chris Muir
    Chris Muir explores the reuse of BTF workspaces across multiple applications and the advantages and disadvantages of reuse at the application level.

    Thought for the Day
    "Can't nothing make your life work if you ain't the architect." — Terry McMillan, American author (Born October 18, 1951)
    Source: brainyquote.com

    Read the article

  • Java multipart POST library

    - by tom
    Is there a multipart POST library out there that achieves the same effect as doing a POST from an HTML form? For example: uploading a file programmatically in Java versus uploading the file using an HTML form. And on the server side, it just blindly expects the request from the client side to be a multipart POST request and parses out the data as appropriate. Has anyone tried this? Specifically, I am trying to see if I can simulate the following with Java:

    The user creates a blob by submitting an HTML form that includes one or more file input fields. Your app sets blobstoreService.createUploadUrl() as the destination (action) of this form, passing the function a URL path of a handler in your app. When the user submits the form, the user's browser uploads the specified files directly to the Blobstore. The Blobstore rewrites the user's request and stores the uploaded file data, replacing the uploaded file data with one or more corresponding blob keys, then passes the rewritten request to the handler at the URL path you provided to blobstoreService.createUploadUrl(). This handler can do additional processing based on the blob key. Finally, the handler must return a headers-only redirect response (301, 302, or 303), typically a browser redirect to another page indicating the status of the blob upload.

    Set blobstoreService.createUploadUrl as the form action, passing the application path to load when the POST of the form is completed:

        <body>
          <form action="<%= blobstoreService.createUploadUrl("/upload") %>"
                method="post" enctype="multipart/form-data">
            <input type="file" name="myFile">
            <input type="submit" value="Submit">
          </form>
        </body>

    Note that this is how the upload form would look if it were created as a JSP. The form must include a file upload field, and the form's enctype must be set to multipart/form-data. When the user submits the form, the POST is handled by the Blobstore API, which creates the blob. The API also creates an info record for the blob, stores the record in the datastore, and passes the rewritten request to your app on the given path with a blob key.
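
    For comparison, the shape of a browser-style multipart request sketched with Python's third-party requests library (the URL and field name are hypothetical; in Java, Apache HttpClient's multipart support plays a similar role):

        import requests

        # Mimics <form method="post" enctype="multipart/form-data"> with a file
        # field named "myFile", posting to a hypothetical upload URL.
        with open("report.pdf", "rb") as f:
            resp = requests.post(
                "http://example.com/upload",
                files={"myFile": ("report.pdf", f, "application/pdf")},
                allow_redirects=False,  # the handler replies with a 301/302/303
            )
        print(resp.status_code, resp.headers.get("Location"))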

    Read the article

  • What are my choices for server-side sandboxed scripting?

    - by alfa64
    I'm building a public website where users share data and scripts to run over some data. The scripts are run server-side in some sort of sandbox, without other interaction, in this cycle: my Perl program reads a user-made script from a database, adds the data to be processed into the script (i.e., a JSON document), then calls the interpreter; the interpreter returns the response (a JSON document or plain text), and I save it to the database with my Perl script. The script should be able to have some access to built-in functions added to the scripting language by myself, but nothing more. So I stumbled upon Node.js as a JavaScript interpreter, and an hour or so ago on Google's V8 (does V8 make sense for this kind of thing?). CoffeeScript also came to mind, since it looks nice and it's still JavaScript. I think JavaScript is widespread enough and more "sandboxable", since it doesn't have OS calls or anything remotely insecure (I think). By the way, I'm writing the system in Perl and PHP for the front end. To improve the question: I'm choosing JavaScript because I think it is secure and simple enough to implement with Node.js, but what other alternatives are there for achieving this kind of task? Lua? Python? I just can't find information on how to run a sandboxed interpreter in a proper way.

    Read the article

  • UIScrollView Infinite Scrolling

    - by Ben Robinson
    I'm attempting to set up a scroll view with infinite (horizontal) scrolling. Scrolling forward is easy - I have implemented scrollViewDidScroll, and when the contentOffset gets near the end, I make the scroll view's contentSize bigger and add more data into the space (I'll have to deal with the crippling effect this will have later!).

    My problem is scrolling back. The plan is: when I get near the beginning of the scroll view, make the contentSize bigger, move the existing content along, add the new data to the beginning, and then - importantly - adjust the contentOffset so the data under the viewport stays the same. This works perfectly if I scroll slowly (or enable paging), but if I go fast (not even very fast!) it goes mad! Here's the code:

        - (void)scrollViewDidScroll:(UIScrollView *)scrollView
        {
            float pageNumber = scrollView.contentOffset.x / 320;
            float pageCount = scrollView.contentSize.width / 320;

            if (pageNumber > pageCount - 4) {
                // Add 10 new pages to the end
                mainScrollView.contentSize = CGSizeMake(mainScrollView.contentSize.width + 3200,
                                                        mainScrollView.contentSize.height);
                // add new data here at (320 * pageCount, 0);
            }

            // *** the problem is here - I use updatingScrollingContent to make sure
            // it's only called once (for accurate testing!)
            if (pageNumber < 4 && !updatingScrollingContent) {
                updatingScrollingContent = YES;
                mainScrollView.contentSize = CGSizeMake(mainScrollView.contentSize.width + 3200,
                                                        mainScrollView.contentSize.height);
                mainScrollView.contentOffset = CGPointMake(mainScrollView.contentOffset.x + 3200, 0);
                for (UIView *view in [mainContainerView subviews]) {
                    view.frame = CGRectMake(view.frame.origin.x + 3200, view.frame.origin.y,
                                            view.frame.size.width, view.frame.size.height);
                }
                // add new data here at (0, 0);
            }

            // ** MY CHECK!
            NSLog(@"%f", mainScrollView.contentOffset.x);
        }

    As the scrolling happens, the log reads:

        1286.500000
        1285.500000
        1284.500000
        1283.500000
        1282.500000
        1281.500000
        1280.500000

    Then, when pageNumber < 4 (we're getting near the beginning):

        4479.500000
        4479.500000

    Great! But the numbers should continue to go down in the 4,000s, whereas the next log entries read:

        1278.000000
        1277.000000
        1276.500000
        1275.500000
        etc...

    Continuing from where it left off! Just for the record, if scrolled slowly the log reads:

        1294.500000
        1290.000000
        1284.500000
        1280.500000
        4476.000000
        4476.000000
        4473.000000
        4470.000000
        4467.500000
        4464.000000
        4460.500000
        4457.500000
        etc...

    Any ideas???? Thanks, Ben.

    Read the article
