Search Results

Search found 69240 results on 2770 pages for 'oracle data masking'.


  • Core Data Errors vs Exceptions Part 3

    - by John Gallagher
    My question is similar to this one.

    Background: I'm creating a large number of objects in a Core Data store using NSOperations to speed things up. I've followed all the Core Data multithreading rules - I've got a single persistent store coordinator and a managed object context per thread that, on save, is merged back into the main managed object context.

    The Problem: When more than one thread is running at once, the following exception is logged when my Core Data store is saved: NSExceptionHandler has recorded the following exception: NSInternalInconsistencyException -- optimistic locking failure

    What I've Tried: My code that creates new entities is quite complex - it makes entities that have relationships with other entities that could be being created in a separate thread. If I replace my object creation routine with some very simple code that just makes non-related entries, everything works perfectly. Initially, as well as the exceptions, I was getting a save error saying Core Data couldn't save due to the merge failing. I read the docs and realised I needed a merge policy on the managed object context I was saving to. I set this up and, as this question states, the save error goes away, but the exception remains.

    My Question: Do I need to worry about these exceptions? If I do need to get rid of them, any ideas on how?

    Read the article

  • How to reduce a data frame keeping the order for other columns

    - by betabandido
    I am trying to reduce a data frame using the max function on a given column. I would like to preserve the other columns, but keep the values from the same rows where each maximum value was selected. An example will make this easier. Let us assume we have the following data frame:

        dframe <- data.frame(list(BENCH=sort(rep(letters[1:4], 4)), CFG=rep(1:4, 4), VALUE=runif(4 * 4)))

    This gives me:

           BENCH CFG      VALUE
        1      a   1 0.98828096
        2      a   2 0.19630597
        3      a   3 0.83539540
        4      a   4 0.90988296
        5      b   1 0.01191147
        6      b   2 0.35164194
        7      b   3 0.55094787
        8      b   4 0.20744004
        9      c   1 0.49864470
        10     c   2 0.77845408
        11     c   3 0.25278871
        12     c   4 0.23440847
        13     d   1 0.29795494
        14     d   2 0.91766057
        15     d   3 0.68044728
        16     d   4 0.18448748

    Now, I want to reduce the data to select the maximum VALUE for each different BENCH:

        aggregate(VALUE ~ BENCH, dframe, FUN=max)

    This gives me the expected result:

          BENCH     VALUE
        1     a 0.9882810
        2     b 0.5509479
        3     c 0.7784541
        4     d 0.9176606

    Next, I tried to preserve the other columns:

        aggregate(cbind(VALUE, CFG) ~ BENCH, dframe, FUN=max)

    This reduction returns:

          BENCH     VALUE CFG
        1     a 0.9882810   4
        2     b 0.5509479   4
        3     c 0.7784541   4
        4     d 0.9176606   4

    Both VALUE and CFG are reduced using the max function, but this is not what I want. For instance, in this example I would like to obtain:

          BENCH     VALUE CFG
        1     a 0.9882810   1
        2     b 0.5509479   3
        3     c 0.7784541   2
        4     d 0.9176606   2

    where CFG is not reduced; it just keeps the value associated with the maximum VALUE for each different BENCH. How could I change my reduction to obtain the last result shown?

    Read the article

  • Quickly generate junk data of certain size in Javascript

    - by user1357607
    I am writing an upload speed test in JavaScript. I am using jQuery (and Ajax) to send chunks of data to a server in order to time how long it takes to get a response. This should, in theory, give an estimation of the upload speed. Of course, to cater for different user bandwidths, I sequentially upload larger and larger amounts of junk data until a threshold duration is reached. Currently I generate the junk data using the following function; however, it is very slow when generating megabytes of data.

        function generate_random_data(size) {
            var chars = "abcdefghijklmnopqrstuvwxyz";
            var random_data = "";
            for (var i = 0; i < size; i++) {
                // index into chars (the original referenced an undefined "char")
                var random_num = Math.floor(Math.random() * chars.length);
                random_data = random_data + chars.substring(random_num, random_num + 1);
            }
            return random_data;
        }

    Really, all I am doing is generating a chunk of bytes to send to the server, but this is the only way I could find to do it in JavaScript. Any help would be appreciated.

    Read the article

  • Data transformation question

    - by tkm
    I have data composed of a list of employers and a list of workers. Each has a many-to-many relationship with the other (so an employer can have many workers, and a worker can have many employers). The way the data is retrieved (and given to me) is as follows: each employer has an array of workers. In other words: employer n has worker x, worker y, etc. So I have a bunch of employer objects, each containing an array of workers. I need to transform this data (and basically invert the relationship). I need to have a bunch of worker objects, each containing an array of employers. In other words: worker x has employer n1, employer n2, etc. The context is hypothetical, so please don't comment on why I need this or why I am doing it this way. I would really just like help on the algorithm to perform this transformation (there isn't that much data, so I would prefer readability over complex optimizations that reduce computational complexity). (Oh, and I am using Java, but pseudocode would be fine.) Thanks!
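    A minimal Java sketch of one way to invert the relationship. The Employer and Worker classes here are hypothetical stand-ins for the real ones, and Worker is assumed to have sensible equals/hashCode (or identity equality) so it can be used as a map key:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Hypothetical stand-ins for the real domain classes.
        class Employer { String name; List<Worker> workers = new ArrayList<>(); }
        class Worker   { String name; }

        public class InvertRelationship {
            // Builds a worker -> employers map from the employer -> workers data.
            static Map<Worker, List<Employer>> invert(List<Employer> employers) {
                Map<Worker, List<Employer>> byWorker = new HashMap<>();
                for (Employer employer : employers) {
                    for (Worker worker : employer.workers) {
                        byWorker.computeIfAbsent(worker, w -> new ArrayList<>())
                                .add(employer);
                    }
                }
                return byWorker;
            }
        }

    Each worker is visited once per employer that references it, so the transformation is a single pass over the existing arrays and stays easy to read.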

    Read the article

  • Is adding users to the group www-data safe on Debian?

    - by John
    Many PHP applications do self-configuration and self-updating. This requires Apache to have write access to the PHP files. While chgrp'ing them all to www-data appears to be a good practice to avoid making them world-writable, I also wish to allow users to create new files and edit existing ones. Is adding users to the group www-data safe on Debian? For example:

        775 root www-data /var/www
        644 john www-data /var/www/johns_php_application.php
        660 john www-data /var/www/johns_php_applications_configuration_file

    Read the article

  • pl/sql Oracle syntax

    - by Paul
    I have a query in PL/SQL that I need to migrate to MS SQL:

        select count(*)
        from table1 t1
        where (conditions1)
        and (conditions2)
        and variable = t1.column1(+)

    Could anyone tell me what the (+) after the column means? (Is it some sort of a sum?)

    Read the article

  • mass deploy Oracle patches with OEM with different OS's

    - by bobsmith12
    I have not been able to get this to work. We are running our OEM Grid Control database/OMS on Red Hat 5 (32-bit), but our databases are on Solaris x86-64. I could not mass deploy agents since the operating systems were different, and when I download patches it is by OS. Is there a way to mass deploy to multiple operating systems? I have a lot of databases. I was given the Red Hat server for OEM because it was available. We have 10.1, 10.2, and 11.1 databases. The OEM DB is 10.2.0.5.

    Read the article

  • Error OEM (oracle Enterprise Manager)

    - by user137520
    Dear friends, I was trying to configure OEM today. Everything seemed fine. I could log in to OEM, but when I tried to launch DBA Studio from there, I could not, and it said no database had been discovered yet. I don't understand why, since I have already created a database. Has anyone experienced this? Thank you.

    Read the article

  • Problems with data driven testing in MSTest

    - by severj3
    Hello, I am trying to get data-driven testing to work in C# with MSTest/Selenium. Here is a sample of the code I am using to set it up:

        [TestClass]
        public class NewTest
        {
            private ISelenium selenium;
            private StringBuilder verificationErrors;

            [DeploymentItem("GoogleTestData.xls")]
            [DataSource("System.Data.OleDb",
                "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=GoogleTestData.xls;Persist Security Info=False;Extended Properties='Excel 8.0'",
                "TestSearches$", DataAccessMethod.Sequential)]
            [TestMethod]
            public void GoogleTest()
            {
                selenium = new DefaultSelenium("localhost", 4444, "*iehta", "http://www.google.com");
                selenium.Start();
                verificationErrors = new StringBuilder();
                var searchingTerm = TestContext.DataRow["SearchingString"].ToString();
                var expectedResult = TestContext.DataRow["ExpectedTextResults"].ToString();

    Here's my error:

        Error 3 An object reference is required for the non-static field, method, or property 'Microsoft.VisualStudio.TestTools.UnitTesting.TestContext.DataRow.get' E:\Projects\SeleniumProject\SeleniumProject\MaverickTest.cs 32 33 SeleniumProject

    The error is underlining the "TestContext.DataRow" part of both statements. I've really been struggling with this one, thanks!

    Read the article

  • RIA Services vs ADO.NET Data Services

    - by Cody C
    I'm currently in the process of creating a Silverlight 3 data-driven application. To access the database, two common approaches are used: RIA Services and ADO.NET Data Services. Does anyone have any guidance on when/why to choose each approach? Here is what I've gathered from my research and experience. Any thoughts?

    ADO.NET Data Services seems to be useful only for strictly database calls. If you need to expose the data services to other applications (ignoring Silverlight 3's domain restriction), this is a good approach. Also, if the URL/query syntax can be useful in your application, this is another advantage.

    RIA Services seem to be a more flexible, accepted framework. It seems to give you more than strictly database access. It does have the limitation of only being usable by the Silverlight/web application, as it is not exposed via a service.

    Thoughts? Ideas? Comments?

    Read the article

  • Stack data storage order

    - by Jamie Dixon
    When talking about a stack in either computing or "real" life, we usually assume a "first on, last off" type of functionality. Because the idea of a stack is based on something in the physical world, does it matter how the data in the stack is stored? I notice in a lot of examples that the stack data is quite often stored in an array, with the newest item added to the stack placed at the bottom of the array (like adding a new plate to an existing stack of plates, except putting it underneath the other plates rather than on top). As a paradigm, does it matter in what order the data is stored within the stack as long as the operation of the stack acts as expected?
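    A deliberately naive Java sketch (not from the question) that stores the newest element at index 0 of its backing array, yet still behaves as a normal LIFO stack to its callers:

        import java.util.Arrays;

        // Newest element lives at index 0, like putting the new plate underneath the pile.
        public class BottomGrowingStack<T> {
            private Object[] items = new Object[0];

            public void push(T value) {
                Object[] next = new Object[items.length + 1];
                next[0] = value;                                   // newest item at the "bottom"
                System.arraycopy(items, 0, next, 1, items.length); // older items shift up
                items = next;
            }

            @SuppressWarnings("unchecked")
            public T pop() {
                T top = (T) items[0];                              // still the most recently pushed item
                items = Arrays.copyOfRange(items, 1, items.length);
                return top;                                        // (no empty-stack check in this sketch)
            }
        }

    Callers only see push and pop obeying the LIFO contract, so the internal layout is purely an implementation detail (here traded for extra copying on every operation).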

    Read the article

  • asp.NET Dynamic Data Site and asp.NET MVC-2 site together

    - by loviji
    Hi, I first created an ASP.NET MVC 2 application and wrote most of the functionality. Afterwards I created an ASP.NET Dynamic Data site. Now, when I click the run button in Visual Studio, the MVC app opens in the browser as http://localhost:50062 and the ASP.NET Dynamic Data site as http://localhost:58395/cms/. But I want to merge these into one application. Can I use an ASP.NET Dynamic Data site and ASP.NET MVC 2 at the same time?

    Read the article

  • WCF Data Services consuming data from EF based repository

    - by John Kattenhorn
    We have an existing repository which is based on EF4 / POCO and is working well. We want to add a service layer using WCF Data Services and are looking for some best-practice advice. So far we have developed a class which has an IQueryable property whose getter triggers the repository's 'get all users' method. The problem so far has been two-fold: 1) it required us to decorate the ID field of the POCO object to tell the data service which field was the ID, which means our POCO object is no longer 'pure'; 2) it cannot figure out the relationships between the objects (which is obvious, I guess). I've now stopped this approach and I'm thinking that maybe we should expose the ObjectContext from the repository and use more of the 'automatic' functionality of EF. Has anybody got any advice or examples of using the repository pattern with WCF Data Services?

    Read the article

  • ETL Operation - Return Primary Key

    - by user302254
    I am using Talend to populate a data warehouse. My job writes customer data to a dimension table and transaction data to the fact table. The surrogate key (p_key) on the fact table is auto-incrementing. When I insert a new customer, I need my fact table to reflect the id of the related customer. As I mentioned, my p_key is auto-incrementing, so I can't just insert an arbitrary value for it. Any thoughts on how I can insert a row into my dimension table and still retrieve the primary key to reference in my fact record? Thanks.
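    Talend components may expose this differently, but the underlying mechanism in plain JDBC is the generated-keys call. A minimal sketch of that general approach (table and column names are made up for the example):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class DimensionInsert {
            // Inserts a customer row and returns the auto-generated surrogate key.
            static long insertCustomer(Connection conn, String customerName) throws SQLException {
                String sql = "INSERT INTO customer_dim (customer_name) VALUES (?)";
                try (PreparedStatement ps = conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
                    ps.setString(1, customerName);
                    ps.executeUpdate();
                    try (ResultSet keys = ps.getGeneratedKeys()) {
                        keys.next();
                        return keys.getLong(1); // the key to reference from the fact row
                    }
                }
            }
        }

    The returned key can then be written into the fact record in the same job, instead of re-querying the dimension table by natural key.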

    Read the article

  • best way to save data in ipod touch/iphone objective-c

    - by Leonardo
    Hi all, I am writing a very simple application for iPhone. Unfortunately I am really a newbie. What I am trying to do is save data at the end of a user experience. The data are really simple: only strings or ints, or some arrays. Later I want to be able to retrieve that data, so I suppose I also need an event id. Could you please point out the best way, API, or technology to achieve that: XML, plain text, serialization...? Many thanks, Leonardo

    Read the article

  • Messaging Systems – Handshaking, Reconciliation and Tracking for Data Transparency

    - by Ahsan Alam
    As many corporations build business partnerships with other organizations, the need to share information becomes unavoidable. Sharing large amounts of data via snail mail, email and/or fax is quickly becoming a thing of the past. More and more organizations rely heavily on FTP and/or web services to exchange data. Corporations apply a wide range of technologies and techniques based on available resources and data transfer needs. Sometimes it involves simple home-grown applications; other times, large investments are made in products like BizTalk, TIBCO, etc. The complexity of information management also varies significantly from one organization to another. Some may deal with a handful of simple steps to process and manage shared data, whereas others may rely on fairly complex processes with heavy interaction with internal and external systems in order to serve the business needs. It is not surprising that many of these systems end up becoming black boxes over time. Consequently, people and the business start to rely more and more on developers and support personnel just to extract simple information, adding to the loss of productivity.

    One of the most important factors in any business is transparency to data, irrespective of technology preferences and the complexity of business processes. Not knowing the state of data can become very costly to the business. Having been involved in messaging systems for some time now, I have heard the same types of questions over and over again. Did we transmit messages successfully? Did we get responses back? What is the expected turn-around time? Did the system experience any errors? When one company transmits data to one or more companies, it may invoke a set of processes that could complete in a matter of seconds, or it could take days. As data travels from one organization to another, the uncertainty grows, and the longer it takes to track the uncertain state of the data, the costlier it gets for the business. So, in every business scenario, it's extremely important to be aware of the state of the data.

    Architects of messaging systems can take several steps to aid data transparency. Some form of data handshaking and reconciliation mechanism, as well as extensive data tracking, can be incorporated into the system to provide clear visibility into the data. What do I mean by handshaking and reconciliation? Some might consider these to be a single concept; however, I like to treat them as two distinct categories. Handshaking serves as message receipts or acknowledgments. When one party transmits messages to another, the receiver must acknowledge each message by sending an immediate response for each transaction. Whenever we use web services, handshaking is often achieved using the request/reply pattern. Similarly, if FTP is used, a receiver can acknowledge by dropping messages for the sender as soon as the files are picked up. These forms of handshaking or acknowledgment inform the message sender and receiver that a successful transaction has occurred.

    I mentioned earlier that it could take anywhere from a few seconds to a number of days before shared data is completely processed. In addition, whenever a batched transaction is used, the processing time for each data element inside the batch can also vary significantly. So, in order to successfully manage data processing, reconciliation becomes extremely important; otherwise it may result in data loss or, in some cases, a hefty penalty. Reconciliation can be done in many ways. Partner organizations can share and compare ad hoc reports to achieve reconciliation. Alternatively, partners can agree on some type of systematic reconciliation messages: systems within the responsible parties can trigger messages to partners as soon as the data process completes.

    The next step in data transparency is extensive data tracking. Some products such as BizTalk and TIBCO provide built-in functionality for data tracking; however, built-in functionality may not always be adequate. Sometimes an additional tracking system (or database) needs to be built in order to monitor all types of data flow, including message transactions, handshaking, reconciliation, system errors and more. If these types of data are captured, they can be presented to business users in any form or fashion. When business users are empowered with such information, the reliance on developers and support teams decreases dramatically.

    In today's collaborative world of information sharing, data transparency is key to the success of every business. The state of business data will constantly change. However, when people have easier access to the various states of data, it allows them to make better and quicker decisions. Therefore, I feel that data handshaking, reconciliation and tracking are very important aspects of messaging systems.
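    A minimal Java sketch of the receipt-style handshake described above: each inbound message gets an immediate acknowledgment, and the acknowledgment is recorded so a later reconciliation pass can spot messages that were never confirmed. Class and field names are illustrative, not taken from any particular product:

        import java.time.Instant;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class HandshakeSketch {
            record Ack(String messageId, String status, Instant receivedAt) {}

            private final Map<String, Ack> ackLog = new ConcurrentHashMap<>();

            // Called by the receiving side for every inbound message.
            public Ack acknowledge(String messageId) {
                Ack ack = new Ack(messageId, "RECEIVED", Instant.now());
                ackLog.put(messageId, ack);   // tracking data for later reconciliation reports
                return ack;                   // returned to the sender as the handshake response
            }

            // Reconciliation check: was this message ever confirmed?
            public boolean wasAcknowledged(String messageId) {
                return ackLog.containsKey(messageId);
            }
        }

    In a real system the acknowledgment log would live in a database so reconciliation and tracking reports can be run long after the exchange, but the request/ack/record pattern is the same.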

    Read the article

  • Yay! Oracle Solaris 11.1 Is Here!

    - by rickramsey
    Even the critters are happy. This is no cosmetic release. It's got TONS of new stuff for both system admins and system developers. In the coming weeks and months I'll highlight specific new capabilities, but for now, here are a few resources to get you started.

    What's New (pdf)
    Describes enhancements for sysadmins in:
    - Installation
    - System configuration
    - Virtualization
    - Security and Compliance
    - Networking
    - Data management
    - Kernel/platform support
    - Network drivers
    - User environment
    And for system developers:
    - Preflight Applications Checker
    - Oracle ExaStack Labs (available to Oracle Partner Network Gold-level members for application certification)
    - Oracle Solaris Studio
    - Integrated Java Virtual Machine (JVM): updates are now managed using the Image Packaging System (IPS)
    - Migration guides and technology mapping tables for AIX, HP-UX and Red Hat Linux

    Download
    Free downloads for SPARC and x86 are available, along with instructions and tips for using the new repositories and Image Packaging System.

    Tech Article: How to Upgrade to Oracle Solaris 11.1
    You can upgrade using either Oracle's official Solaris release repository or, if you have a support contract, the Support repository. Peter Dennis explains how.

    Documentation
    Superbly written instructions from our dedicated cadre of world-renowned but woefully underpaid technical writers:
    - Getting Started
    - Installing, Booting, and Updating
    - Establishing an Oracle Solaris Network
    - Administering Essential Features
    - Administering Network Services
    - Securing the Operating System
    - Monitoring and Tuning
    - Creating and Using Virtual Environments
    - Working with the Desktop
    - Developing Applications
    - Reference Manuals
    - And more

    Training
    And don't forget the new online training courses from Oracle University! I really liked them. Here are my first and second impressions.

    Website | Newsletter | Facebook | Twitter

    Read the article

  • iphone tab bar controller and core data.

    - by Sway
    OK, bit of a newbie-type question. I want to use Core Data together with tab and navigation controllers. In Xcode, if I create a Navigation-Based Application I get the option to choose Core Data, whereas if I create a Tab Bar Application I don't get the choice. I understand that tab bars display view controllers, so it kinda makes sense. However, given that by default it sticks the basic Core Data code in the application delegate, I don't see why this isn't offered. At the moment I'm creating the two projects and cutting and pasting between them. Does this omission in Xcode seem weird to you? Is it some sort of oversight? Thanks, Matt

    Read the article

  • Seed data for grails application

    - by bsreekanth
    Hello, what is the best way to load seed (initial or test) data into a Grails application? I'm considering 3 options:

    1. Putting everything in *BootStrap.groovy files. This is tedious if the domain classes and test data are many.
    2. Writing custom functionality to load it through XML. May not be too difficult with Groovy's excellent XML support, but it means a lot of switch statements for different domain classes.
    3. Using the Liquibase loadData API. I see you can load the data fairly easily from CSV files.

    Choice 3 seems the easiest, but I'm not familiar with Liquibase. Is it a good fit for this scenario, or is it only used for migrations, DB changes, etc.? If anyone could provide a better solution, or point to an example with Liquibase, it would be a great help. Thanks...

    Read the article

  • Validating Petabytes of Data with Regularity and Thoroughness

    - by rickramsey
    by Brian Zents

    When former Intel CEO Andy Grove said “only the paranoid survive,” he wasn’t necessarily talking about tape storage administrators, but it’s a lesson they’ve learned well. After all, tape storage is the last line of defense to prevent data loss, so tape administrators are extra cautious in making sure their data is secure. Not surprisingly, we are often asked for ways to validate tape media and the files on them. In the past, an administrator could validate the media, but doing so was often tedious or disruptive or both. The debut of the Data Integrity Validation (DIV) and Library Media Validation (LMV) features in the Oracle T10000C drive helped eliminate many of these pains. Also available with the Oracle T10000D drive, these features use hardware-assisted CRC checks that not only ensure the data is written correctly the first time, but also do so much more efficiently. Traditionally, a CRC check takes at least 25 seconds per 4GB file with a 2:1 compression ratio, but the T10000C/D drives can reduce the check to a maximum of nine seconds because the entire check is contained within the drive. No data needs to be sent to a host application. A time savings of at least 64 percent is extremely beneficial over the course of checking an entire 8.5TB T10000D tape.

    While the DIV and LMV features are better than anything else out there, what storage administrators really need is a way to check petabytes of data with regularity and thoroughness. With the launch of Oracle StorageTek Tape Analytics (STA) 2.0 in April, there is finally a solution that addresses this longstanding need. STA bundles these features into one interface to automate all media validation activities across all Oracle SL3000 and SL8500 tape libraries in an environment. And best of all, the validation process can be associated with the health checks an administrator would be doing already through STA. In fact, STA validates the media based on any of the following policies:

    - Random Selection – Randomly selects media for validation whenever a validation drive in the standalone library or library complex is available.
    - Media Health = Action – Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of “Action.” You can specify from one to five exchanges.
    - Media Health = Evaluate – Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of “Evaluate.” You can specify from one to five exchanges.
    - Media Health = Monitor – Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of “Monitor.” You can specify from one to five exchanges.
    - Extended Period of Non-Use – Selects media that have not had an exchange for a specified number of days. You can specify from 365 to 1,095 days (one to three years).
    - Newly Entered – Selects media that have recently been entered into the library.
    - Bad MIR Detected – Selects media with an exchange resulting in a “Bad MIR Detected” error. A bad media information record (MIR) indicates degraded high-speed access on the media.

    To avoid disrupting host operations, an administrator designates certain drives for media validation operations. If a host requests a file from media currently being validated, the host’s request takes priority. To ensure that the administrator really knows it is the media that is bad, as opposed to the drive, STA includes drive calibration and qualification features. In addition, validation requests can be re-prioritized or cancelled as needed. To ensure that a specific tape isn’t validated too often, STA prevents a tape from being validated twice within 24 hours via one of the policies described above. A tape can be validated more often if the administrator manually initiates the validation. When the validations are complete, STA reports the results. STA does not report simply a “good” or “bad” status. It also reports if media is even degraded so the administrator can migrate the data before there is a true failure. From that point, the administrators’ paranoia is relieved, as they have the necessary information to make a sound decision about the health of the tapes in their environment.

    About the Photograph: Photograph taken by Rick Ramsey in Death Valley, California, May 2014

    - Brian

    Follow OTN Garage on: Web | Facebook | Twitter | YouTube

    Read the article

  • WebCenter Marketing and Upcoming Events

    - by rituchhibber
    Events (Date | Event Name | Location/Country):
    - October 30, 2012 | ResCare Solves Content Lifecycle Challenges with Oracle WebCenter | Webcast
    - November 1, 2012 | Paper Burying Your HR Processes? Dig Your Way Out With Oracle WebCenter! | Webcast
    - November 15, 2012 | Social Business Thought Leader Webcast: Three Ways to Fix Your Broken Organization, featuring Christian Finn | Webcast

    Marketing:
    - WebCenter Sites Sales eVite: Embrace the Base: Create an Exceptional Online Customer Experience with Oracle WebCenter Sites. Directs recipients to the Connected Customer Experience Resource Center to see the latest demos, analyst reports, and customer webcasts promoting WebCenter Sites. For more information Click here.
    - WebCenter Social Business Thought Leaders Series: Digital Darwinism: How Brands Can Survive the Rapid Evolution of Society and Technology. Brian Solis, Altimeter Group digital analyst and futurist. December 13, 2012, 10am PDT. Registration available soon; find other content from this speaker here.
    - Webcast: WebCenter Sites for Applications: Disconnected Online Customer Experience? Connect it with Oracle WebCenter. November 8, 2012. eVite | Registration Page
    - WebCenter in Action Customer & Partner webcast series: Started earlier in FY13, a new webcast series featuring WebCenter customer deployments that are executed by a partner. The next webcast in the series will be November 14th: Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter. Click here to learn more.
    - OnDemand Webcast: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter. Complex documents must be created, assembled, reviewed, and tracked. To avoid fragmented, chaotic information processes, organizations must adopt an integrated set of strategies, standards, best practices, and technologies for managing information. Attend this webcast to learn how Oracle WebCenter has allowed ResCare to: solve content lifecycle challenges, reduce compliance and business risks, and increase adoption of the intranet as a primary business communication tool.

    On-Demand Assets (Date | Event Name | Location/Country):
    - On Demand | Avoid Social Media Fatigue - Learn the 9 C’s of Customer Engagement, featuring Ray Wang, Principal Analyst and CEO, Constellation Research | Webcast
    - On Demand | WebCenter in Action Series: Hitachi Data Systems Improves Global Web Experience with Oracle WebCenter, presented by Hitachi Data Systems and Lingotek | Webcast
    - On Demand | Managing Social Relationships for the Enterprise, featuring Jeremiah Owyang, Industry Analyst, Altimeter Group and Reggie Bradford, Vice President, Oracle | Webcast
    - On Demand | Oracle’s Vision for the Social-Enabled Enterprise, presented by Mark Hurd, Thomas Kurian and Reggie Bradford | Webcast
    - On Demand | WebCenter in Action Series: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter, presented by Qualcomm and Keste | Webcast
    - On Demand | Social Business Thought Leaders Series: 6 Counterintuitive Best Practices for Social Collaboration Adoption, featuring John Brunswick, Oracle | Webcast
    - On Demand | Oracle WebCenter Connects Patients and Researchers in Cancer Control Mission, presented by Canadian Partnership Against Cancer and App-Systems | Webcast
    - On Demand | Oracle WebCenter: Modernize, Aggregate and Extend Your Portals | Webcast
    - On Demand | Top 10 Technology Trends Driving Business Innovation, featuring Andy Mulholland, CTO, Capgemini | Webcast
    - On Demand | Ancestry.com Helps Families Uncover History with Oracle WebCenter | Webcast
    - On Demand | Organic Business Networks: Doing Business in a Hyper-Connected World, featuring Mike Fauscette, GVP, IDC | Webcast
    - On Demand | Social Business and Innovation, featuring John Mancini, President, AIIM | Webcast
    - On Demand | Do More with Oracle WebCenter: Expand Beyond Web Experience Management | Webcast
    - On Demand | Race Against the Machine, featuring Andrew McAfee, author and principal scientist at MIT | Webcast
    - On Demand | Introducing Oracle WebCenter Sites 11gR1: Transforming the Online Experience | Webcast
    - On Demand | Mobile is the New Face of Engagement, featuring Ted Schadler, Vice President & Principal Analyst, Forrester Research Inc | Webcast
    - Analyst Report: IDC Research: Oracle Debuts New Release of Oracle WebCenter Sites

    Read the article

  • Automating scraping of table data to XML

    - by thewinchester
    Problem: I have a YQL query result that I'm trying to convert and sort into a clean XML file.

    Background: Being the pains that they are, information from the World Cup isn't freely available in an easy-to-reuse format. So, after a bit of finessing with YQL, I have managed to liberate the required table rows which contain the data I'm after. The YQL query can be viewed at: http://query.yahooapis.com/v1/public/yql/ravingbeefsteak/worldcup2010groupliberator?diagnostics=true

    I'd now like to convert this information into XML, and being an absolute n00b I don't know where to start or what to look for. I also need to do a find and replace on the data to get the URLs working as they should without manual changes, and hopefully an initial sorting of the data. If anyone can point me in the right direction of what I need to be doing, it would be greatly appreciated.
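    A minimal Java sketch of the fetch-and-rewrite step, assuming the YQL endpoint from the question returns XML; the replacement strings and the output file name are assumptions for illustration. Sorting would then be done on the parsed XML (for example after loading it into a DOM or via an XSLT):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class YqlToXml {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://query.yahooapis.com/v1/public/yql/"
                        + "ravingbeefsteak/worldcup2010groupliberator?diagnostics=true");
                StringBuilder xml = new StringBuilder();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        xml.append(line).append('\n');
                    }
                }
                // Hypothetical find-and-replace to turn relative links into absolute ones.
                String fixed = xml.toString().replace("href=\"/", "href=\"http://www.fifa.com/");
                Files.write(Paths.get("worldcup-groups.xml"), fixed.getBytes(StandardCharsets.UTF_8));
            }
        }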

    Read the article

  • Sorting NSSets of a core data entity - Objective-c

    - by ncohen
    Hi everyone, I would like to sort the data of a Core Data NSSet (I know we can only do it with arrays, but let me explain...). I have an entity User which has a to-many relationship with the entity Recipe. A Recipe has the attributes name and id. I would like to get the data such that:

        NSArray *id = [[user.recipes valueForKey:@"identity"] allObjects];
        NSArray *name = [[user.recipes valueForKey:@"name"] allObjects];

    if I take the object at index 1 in both arrays, they correspond to the same recipe... Thanks

    Read the article

  • Interpolation of scattered data: What could I do?

    - by Simon
    Hi! I need your help. I'm working on a 3D chart in Java using Java 3D. It should be able to display a bunch of measured values. As measured, the data I get is scattered, which means I will have to interpolate the missing points in order to get my surface plotted nicely. I haven't studied all that 3D geometry stuff yet and I don't know where to start. My idea is to triangulate the points into a surface and then, based on the triangulation, interpolate the missing points (see this to get a rough idea of what I want to achieve). Does anyone have experience with the interpolation of scattered data? Is my approach the right one? If so, what kind of data structures and algorithms will I need in order to triangulate my point cloud?
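    For reference, once the scattered points have been triangulated (a Delaunay triangulation is the usual choice), the missing height at a query point inside a triangle is commonly filled in with barycentric (linear) interpolation. A sketch of the formula, with the symbols defined here rather than taken from the question: given triangle vertices (x_i, y_i, z_i), i = 1, 2, 3, and a query point (x, y) inside that triangle,

        \lambda_1 = \frac{(y_2 - y_3)(x - x_3) + (x_3 - x_2)(y - y_3)}
                         {(y_2 - y_3)(x_1 - x_3) + (x_3 - x_2)(y_1 - y_3)}, \qquad
        \lambda_2 = \frac{(y_3 - y_1)(x - x_3) + (x_1 - x_3)(y - y_3)}
                         {(y_2 - y_3)(x_1 - x_3) + (x_3 - x_2)(y_1 - y_3)}, \qquad
        \lambda_3 = 1 - \lambda_1 - \lambda_2,

        z(x, y) = \lambda_1 z_1 + \lambda_2 z_2 + \lambda_3 z_3.

    The same weights also tell you whether the point lies inside the triangle (all \lambda_i between 0 and 1), which is useful when walking the triangulation to find the containing triangle.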

    Read the article
