Search Results

Search found 21434 results on 858 pages for 'query master'.


  • Entity Framework: How to bind related products

    - by Waheed
    I am using the following LINQ query:

        from p in Product.Include("ProductDetails.Colors")
        where p.ProductID.Equals(18)
        select p;

    I then bind the result of this query to a GridView. The ProductDetails bind to the grid fine, but the Colors do not. To bind a color I am using <%# Eval("Colors.CategoryName") %>, which fails with: "Field or property with the name 'Colors.CategoryName' was not found on the selected data source." Yet in a loop I can read the value fine:

        foreach (ProductDetails proDet in pd.ProductDetails)
        {
            string bar = proDet.BarCode;
            string color = proDet.Colors.CategoryName;
        }
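    A likely explanation (not confirmed in the question): the quoted error is the one a BoundField raises, and BoundField cannot traverse navigation properties the way a TemplateField with Eval can. A sketch of the usual workaround, projecting the nested value into a flat, bindable shape (context and property names assumed from the question):

        // Flatten the navigation property so the grid binds to plain
        // scalar columns (a sketch; names assumed from the question).
        var rows = from p in context.Product
                   where p.ProductID == 18
                   from d in p.ProductDetails
                   select new
                   {
                       d.BarCode,
                       ColorName = d.Colors.CategoryName // bind as "ColorName"
                   };
        GridView1.DataSource = rows.ToList();
        GridView1.DataBind();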

    Read the article

  • Folder.Bind - "Id is malformed" - Exchange Web Services Managed API

    - by Michael Shimmins
    I'm passing the Folder.Id.UniqueId property of a folder retrieved from a FindFolders query via the query string to another page. On this second page I want to use that UniqueId to bind to the folder and list its mail items:

        string parentFolderId = Request.QueryString["id"];
        ...
        Folder parentFolder = Folder.Bind(exchangeService, parentFolderId);
        // do something with parent folder

    When I run this code it throws an exception telling me the Id is malformed. I thought maybe it needed to be wrapped in a FolderId object:

        Folder parentFolder = Folder.Bind(exchangeService, new FolderId(parentFolderId));

    Same issue. I've been searching for a while and have found some suggestions about Base64/UTF8 conversion, but that did not solve the problem either. Does anyone know how to bind to a folder with a given unique id?
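    One common cause worth ruling out (an assumption, not confirmed by the poster): EWS unique ids are Base64-style strings containing '+' and '/', and a raw query string corrupts them ('+' round-trips as a space). A sketch of the encode-on-send fix:

        // When building the link, URL-encode the id so '+' and '/' survive:
        string link = "Folder.aspx?id=" +
                      HttpUtility.UrlEncode(folder.Id.UniqueId);

        // Request.QueryString decodes exactly once on the receiving page:
        string parentFolderId = Request.QueryString["id"];
        Folder parentFolder = Folder.Bind(exchangeService,
                                          new FolderId(parentFolderId));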

    Read the article

  • PHP PDO Related: Update SQL Statement not Updating the content of Database

    - by Rachel
    I am trying to run an update statement using a prepared statement in a PHP script, but it appears not to update the record in the database. I am not sure why, so I would appreciate any insights. Code:

        $query = "UPDATE DatTable SET DF_PARTY_ID = :party_id,
                  DF_PARTY_CODE = :party_code, DF_CONNECTION_ID = :connection_id
                  WHERE DF_PARTY_ID = ':party_id'";
        $stmt = $this->connection->prepare($query);
        $stmt->bindValue(':party_id', $data[0], PDO::PARAM_INT);
        $stmt->bindValue(':party_code', $data[1], PDO::PARAM_INT);
        $stmt->bindValue(':connection_id', $data[2], PDO::PARAM_INT);
        $stmt->execute();

    Any suggestions?
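    The quoted placeholder looks like the culprit: inside quotes, ':party_id' is sent as a literal string rather than bound, so the WHERE clause matches nothing. A sketch of the corrected statement (a second placeholder name is used because, per the PDO documentation, the same named marker cannot be used twice unless emulated prepares are on):

        $query = "UPDATE DatTable
                     SET DF_PARTY_ID = :party_id,
                         DF_PARTY_CODE = :party_code,
                         DF_CONNECTION_ID = :connection_id
                   WHERE DF_PARTY_ID = :old_party_id"; // no quotes around the marker
        $stmt = $this->connection->prepare($query);
        $stmt->bindValue(':party_id', $data[0], PDO::PARAM_INT);
        $stmt->bindValue(':party_code', $data[1], PDO::PARAM_INT);
        $stmt->bindValue(':connection_id', $data[2], PDO::PARAM_INT);
        $stmt->bindValue(':old_party_id', $data[0], PDO::PARAM_INT);
        $stmt->execute();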

    Read the article

  • Ado.net performance: What does SNIReadSync do?

    - by Beatles1692
    We have a query that takes 2 seconds to run in SQL Server Management Studio but 13 seconds to show on a client screen. I used dotTrace to profile my source code and noticed a method called SNIReadSync (part of the ADO.NET assemblies) that takes most of the time (9 seconds). I ran my code on the server itself to rule out network effects and the result was the same. It doesn't matter whether I use OleDbConnection or SqlConnection, or a DataReader or a DataSet, and connection pooling does not solve the issue (as my results show). I googled this and couldn't find an answer to what this method actually does and how to improve it. Here's what I found on Stack Overflow, which isn't helpful either: http://stackoverflow.com/questions/1610874/snireadsync-executing-between-120-500-ms-for-a-simple-query-what-do-i-look-for
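    For what it's worth (a common explanation, not a confirmed diagnosis): SNIReadSync is just the client blocking on the network read, so the time is really being spent server-side. A classic cause of "fast in SSMS, slow from ADO.NET" is a different cached plan, because SSMS runs with SET ARITHABORT ON by default and ADO.NET does not. A diagnostic sketch:

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // Match SSMS's default session options before running the
            // slow query; if it now runs fast, a different cached plan
            // (parameter sniffing under other SET options) is the culprit.
            using (var set = new SqlCommand("SET ARITHABORT ON", conn))
                set.ExecuteNonQuery();
            using (var cmd = new SqlCommand(slowQuery, conn))
            using (var reader = cmd.ExecuteReader())
                while (reader.Read()) { /* consume rows */ }
        }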

    Read the article

  • nhibernate activerecord linq Contains problem

    - by Robert Ivanc
    Hi, I am having problems with the following query in Castle ActiveRecord 2.12:

        var q = from o in SodisceFMClientVAR.Queryable
                where taxNos2.Contains(o.TaxFileNo)
                select o;

    taxNos2 is an array of strings. When run, I get an exception:

        InnerException: {"Index was out of range. Must be non-negative and less than the size of the collection.\r\nParameter name: index"} (System.ArgumentOutOfRangeException)
        StackTrace:
           at Castle.ActiveRecord.ActiveRecordBase.ExecuteQuery(IActiveRecordQuery query)
           at Castle.ActiveRecord.Linq.LinqResultWrapper`1.Populate()
           at Castle.ActiveRecord.Linq.LinqResultWrapper`1.GetEnumerator()
           at NHibernate.Linq.Query`1.GetEnumerator()
           at System.Linq.Buffer`1..ctor(IEnumerable`1 source)
           at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
           at prosoft.skb.insolventnostDataAccess.InsolventnostDataAccAR.GetOurUsersListLS(ICollection`1 taxNos) in C:\svn\skb\insolventnostWithAR\prosoft.skb.insolventnostDataAccess\InsolventnostDataAR.cs:line 214
           at prosoft.skb.insolventnostDataFromWS.InsolventnostFromWS.filterByOurUsers(IEnumerable`1 odprtiPostopki) in C:\svn\skb\insolventnostWithAR\prosoft.skb.insolventnostDataFromWS\InsolventnostFromWS.cs:line 237
           at prosoft.skb.insolventnostDataFromWS.InsolventnostFromWS.SyncData() in C:\svn\skb\insolventnostWithAR\prosoft.skb.insolventnostDataFromWS\InsolventnostFromWS.cs:line 53

    Does Contains even work in LINQ for NHibernate? I couldn't find anything via Google. Is there a workaround? Thanks!
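    A workaround worth trying (an assumption; the early LINQ provider that Castle ActiveRecord wraps had patchy Contains() support): drop to the Criteria API, whose IN restriction is well supported. A sketch, assuming TaxFileNo is the mapped property name:

        // Restrictions.In translates to a SQL IN (...) over the array:
        var matches = SodisceFMClientVAR.FindAll(
            NHibernate.Criterion.Restrictions.In("TaxFileNo", taxNos2));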

    Read the article

  • Facebook FQL Question

    - by Michael
    I'm trying to use the Facebook JavaScript API to run FQL queries. It works fine if I fetch users by username or uid, but not when I search by name:

        function get_username() {
            var name = prompt("Enter name: ");
            FB.api(
                { method: 'fql.query',
                  query: 'SELECT username FROM user WHERE name in "' + name + '"' },
                function(response) {
                    var x = response[0].username;
                    alert('Username is ' + x);
                }
            );
        }

    I realize this will probably return multiple users, but I can't figure out how to tell whether it's returning multiple users or none at all; it seems to freeze after trying to get response[0].username. I'm probably making a beginner mistake, but any ideas?
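    Two things stand out (hedged guesses, not a verified answer): the in "..." comparison is not valid FQL, and the callback never checks for an error or an empty result, so response[0].username blows up. FQL also only permits WHERE clauses on indexable columns, and name may not be one; the error check in the sketch below would at least surface that:

        FB.api(
            { method: 'fql.query',
              query: "SELECT username FROM user WHERE name = '" + name + "'" },
            function(response) {
                if (!response || response.error_code) {
                    // error responses carry error_code/error_msg, not rows
                    alert('Query failed: ' + (response && response.error_msg));
                } else if (response.length === 0) {
                    alert('No users found');
                } else {
                    alert('Username is ' + response[0].username); // first match
                }
            }
        );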

    Read the article

  • Linq-to-SQL: Ignore null parameters from WHERE clause

    - by Peter Bridger
    The query below should return records that either have a matching Id supplied in ownerGroupIds or that match ownerUserId. However, if ownerUserId is null, I want that part of the query to be ignored.

        public static int NumberUnderReview(int? ownerUserId, List<int> ownerGroupIds)
        {
            return (
                from c in db.Contacts
                where c.Active == true
                   && c.LastReviewedOn <= DateTime.Now.AddDays(-365)
                   && ( // Owned by user
                        !ownerUserId.HasValue || c.OwnerUserId.Value == ownerUserId.Value
                      )
                   && ( // Owned by group
                        ownerGroupIds.Count == 0 || ownerGroupIds.Contains(c.OwnerGroupId.Value)
                      )
                select c
            ).Count();
        }

    However, when a null is passed in for ownerUserId I get the following error: "Nullable object must have a value." I get a tingling feeling I may have to use a lambda expression in this instance.
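    A sketch of the usual fix (no special lambda magic required): compose the query conditionally, so the translated expression never touches .Value on a null parameter:

        public static int NumberUnderReview(int? ownerUserId, List<int> ownerGroupIds)
        {
            var query = db.Contacts.Where(c => c.Active
                && c.LastReviewedOn <= DateTime.Now.AddDays(-365));

            if (ownerUserId.HasValue)                // apply only when present
                query = query.Where(c => c.OwnerUserId == ownerUserId.Value);

            if (ownerGroupIds.Count > 0)
                query = query.Where(c => c.OwnerGroupId.HasValue
                    && ownerGroupIds.Contains(c.OwnerGroupId.Value));

            return query.Count();
        }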

    Read the article

  • Lessons from a SAN Failure

    - by Bill Graziano
    At 1:10AM Sunday morning the main SAN at one of my clients suffered a “partial” failure.  Partial means that the SAN was still online and functioning but the LUNs attached to our two main SQL Servers “failed”.  Failed means that SQL Server wouldn’t start and the MDF and LDF files mostly showed a zero file size.  But they were online and responding and most other LUNs were available.  I’m not sure how SANs know to fail at 1AM on a Saturday night but they seem to.  From a personal standpoint this worked out poorly: I was out with friends and after more than a few drinks.  From a work standpoint this was about the best time to fail you could imagine.  Everything was running well before Monday morning.  But it was a long, long Sunday.  I started tipsy, got tired and ended up hung over later in the day.  Note to self: try not to go out drinking right before the SAN fails.

    This caught us at an interesting time.  We’re in the process of migrating to an entirely new set of servers, so some things were partially moved.  This made it difficult to follow our procedures as cleanly as we’d like.  The benefit was that we had much better documentation of everything on the server.  I would encourage everyone to really think through the process of implementing your DR plan and document as much as possible.  Following a checklist is much easier than trying to remember at night, under pressure, in a hurry, after a few drinks.

    I had a series of estimates on how long things would take.  They were accurate for any single server failure.  They weren’t accurate for a SAN failure that took two servers down.  This wasn’t bad but we should have communicated better.

    Don’t forget how many things are outside the database.  Logins, linked servers, DTS packages (yikes!), jobs, service broker, DTC (especially DTC), database triggers and any objects in the master database are all things you need backed up.  We’d done a decent job on this and didn’t find significant problems here.  That said, it still took a lot of time, and there were many annoyances as a result.  Small settings like a login’s default database had a big impact on whether an application could run.  This is probably the single biggest area of concern when looking to recreate a server.  I’d encourage everyone to go through every single node of SSMS and look for user-created objects or settings outside the database.

    Script out your logins with the proper SID and already-encrypted passwords and keep the script updated.  This makes life so much easier.  I used an approach based on KB246133 that worked well.  I’ll get my scripts posted over the next few days.

    The disaster can cause your DR process to fail in unexpected ways.  We have a job that scripts out all logins and role memberships and writes it to a file.  This runs on the DR server and pulls from the production server.  Upon opening the file I found that the contents were a “server not found” error.  Fortunately we had other copies and didn’t need to try and restore the master database.  This now runs on the production server and pushes the script to the DR site.  Soon we’ll get it pushed to our version control software.

    One of the biggest challenges is keeping your DR resources up to date.  Any server change (new linked server, new SQL Server Agent job, etc.) means that your DR plan (and scripts) is out of date.  It helps to automate the generation of these resources if possible.

    Take time now to test your database restore process.  We test ours quarterly.  If you have a large database I’d also encourage you to invest in a compressed backup solution.  Restoring backups was the single largest consumer of time during our recovery.  And yes, there’s a database mirroring solution planned in our new architecture.

    I didn’t have much involvement in things outside SQL Server but this caused many, many things to change in our environment.  Many applications today aren’t just executables or web sites.  They are a combination of those plus network infrastructure, reports, network ports, IP addresses, DTS and SSIS packages, batch systems and many other things.  These all needed a little bit of attention to make sure they were functioning properly.

    Profiler turned out to be a handy tool.  I started a trace for failed logins and kept that running.  That let me fix a number of problems before people were able to report them.  I also ran traces to capture exceptions.  This helped identify problems with linked servers.

    Overall the thing that gave me the most trouble was linked servers.  In order for a linked server to function properly you need to be pointed to the right server, have the proper login information, have the network routes available and have MSDTC configured properly.  We have a lot of linked servers and this created many failure points.  Some of the older linked servers used IP addresses and not DNS names.  This meant we had to go in and touch all those linked servers when the servers moved.
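    For readers unfamiliar with the KB246133 approach mentioned above: it supplies a stored procedure, sp_help_revlogin, that scripts logins with their original SID and already-hashed password. A minimal sketch of the idea (not the author's script; the newer revision for SQL 2005+ emits CREATE LOGIN, and the hex values below are placeholders):

        -- After installing sp_help_revlogin from the KB script:
        EXEC master.dbo.sp_help_revlogin;
        -- It emits, per login, a statement along these lines, which
        -- recreates the login on the DR server with a matching SID so
        -- database users stay mapped:
        -- CREATE LOGIN [appuser]
        --     WITH PASSWORD = 0x0100... HASHED,
        --     SID = 0x8F0D...;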

    Read the article

  • Pylons FormEncode @validate decorator pass parameters into re-render action

    - by joelbw
    I am attempting to use the validate decorator in Pylons with FormEncode and I have encountered an issue. I am validating a form on a controller action that requires parameters, and if the validation fails, the parameters aren't passed back in when the form is re-rendered. Here's an example:

        def question_set(self, id):
            c.question_set = meta.Session.query(QuestionSet).filter_by(id=id).first()
            c.question_subjects = meta.Session.query(QuestionSubject).order_by(QuestionSubject.name).all()
            return render('/derived/admin/question_set.mako')

    This is the controller action that contains my form. The form adds questions to an existing question set, identified by id. My add-question controller action looks like this:

        @validate(schema=QuestionForm(), form='question_set', post_only=True)
        def add_question(self):
            # stuff...

    Now, if the validation fails, FormEncode attempts to redisplay the question_set form, but it does not pass the id parameter back in, so the question set form will not render. Is it possible to pass the id back in with the @validate decorator, or do I need a different method to achieve this?
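    One workaround sketch (an assumption, not the canonical Pylons answer): carry the id inside the form as a hidden field, and let question_set() fall back to the POSTed value when the route variable is missing on re-render:

        # In question_set.mako (hypothetical field):
        #   <input type="hidden" name="id" value="${c.question_set.id}" />

        def question_set(self, id=None):
            # On a failed-validation re-render the route arg is gone,
            # so fall back to the value posted with the form.
            id = id or request.params.get('id')
            c.question_set = meta.Session.query(QuestionSet).filter_by(id=id).first()
            ...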

    Read the article

  • New Article: SharePoint 2010 for Developers – What's new?

    - by Sahil Malik
    SharePoint 2010 Training: more information This is a nice overview/beginner's article about what is new in SharePoint 2010 from a purely developer point of view. Excerpt – “In some ways SharePoint 2007 was a brand new incarnation of the SharePoint product. For the very first time, ASP.NET 2.0 was applied properly to the product. Things such as master pages, membership providers, sitemap providers etc. were used heavily in SharePoint. As a result, SharePoint 2007 got a whole new developer story to it. But in some ways it was a first version of a big product, so the development story left us wanting more. Wanting more because in some ways the API wasn’t ideal, and most certainly the development tools were somewhere between non-existent and bad. Diagnosing SharePoint errors was another frustrating story many have endured. What has changed in SharePoint 2010? Let’s find out.” Read full article ....

    Read the article

  • This Week In Geek History: Steve Jobs Demos the First Mac, Mythbusters Hits the Airwaves, and Dr. Strangelove Invades Popular Culture

    - by Jason Fitzpatrick
    It was quite a wild ride for this week in Geek History: Steve Jobs gave a demonstration of the first Macintosh computer, beloved geek show MythBusters took to the air, and the iconic movie Dr. Strangelove appeared in theatres and our collective consciousness.

    Read the article

  • How to Edit rows in DataGridView?

    - by DanSogaard
    So I've a button on frmMain that opens up another form, frmEdit, which has a DataGridView that displays the result of the following query:

        BindingSource bs = new BindingSource();
        string sqlqry = "select p_Name from Items where p_Id=" + p_Id;
        SqlCeCommand cmd = new SqlCeCommand(sqlqry, conn);
        SqlCeDataReader rdr = cmd.ExecuteReader();
        bs.DataSource = rdr;
        dataGridView1.DataSource = bs;
        this.ShowDialog(parent);

    When frmEdit loads, the grid displays the query results just fine, but I can't edit. I tried dgv.BeginEdit(true), but it doesn't work. Edit mode works fine if I use the wizard to bind data sources to the grid, but I need to execute my own customized queries and be able to update them directly.
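    A sketch of the likely fix (names assumed from the question): a DataReader binds as a read-only list, so the grid can never enter edit mode. Fill a DataTable instead and let an adapter write changes back:

        // Parameterized, which also avoids the string-concatenated SQL.
        var adapter = new SqlCeDataAdapter(
            "SELECT p_Id, p_Name FROM Items WHERE p_Id = @id", conn);
        adapter.SelectCommand.Parameters.AddWithValue("@id", p_Id);
        // Generates UPDATE/INSERT/DELETE; requires a primary key on Items.
        var builder = new SqlCeCommandBuilder(adapter);

        var table = new DataTable();
        adapter.Fill(table);
        dataGridView1.DataSource = table;   // editable in the grid

        // Later, e.g. in a Save button handler:
        adapter.Update(table);              // pushes grid edits back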

    Read the article

  • Invoke an AsyncController Action from within another Controller Action?

    - by Luis
    Hi, I'd like to accomplish the following:

        class SearchController : AsyncController
        {
            public ActionResult Index(string query)
            {
                if (!isCached(query))
                {
                    // here I want to asynchronously invoke the Search action
                }
                else
                {
                    ViewData["results"] = Cache.Get("results");
                }
                return View();
            }

            public void SearchAsync()
            {
                // some work
                Cache.Add("results", result);
            }
        }

    I'm planning to make an AJAX 'ping' from the client in order to know when the results are available, and then display them. But I don't know how to invoke the asynchronous action in an asynchronous way! Thank you very much. Luis
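    One sketch of the polling approach (an assumption; RunSearch is a hypothetical helper standing in for the real search, and Cache.Get/Add mirror the question's own wrapper): queue the work on the thread pool rather than routing through the AsyncController machinery, and let the AJAX ping check the cache:

        public ActionResult Index(string query)
        {
            if (!IsCached(query))
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    var result = RunSearch(query); // hypothetical helper
                    Cache.Add("results", result);
                });
                return View("Pending");            // client begins polling
            }
            ViewData["results"] = Cache.Get("results");
            return View();
        }

        // Polled by the AJAX ping:
        public JsonResult Ready()
        {
            return Json(Cache.Get("results") != null,
                        JsonRequestBehavior.AllowGet);
        }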

    Read the article

  • Delphi TBytesField - How to see the text properly - Source is HIT OLEDB AS400

    - by myitanalyst
    We are connecting to a multi-member AS400 iSeries table via HIT OLEDB and HIT ODBC. You connect to this table via an alias to access a specific member. We create the alias on the AS400 this way:

        CREATE ALIAS aliasname FOR table(membername)

    We can then query each member of the table this way:

        SELECT * FROM aliasname

    We are testing this in Delphi 6 first, but will move it to D2010 later. We are using HIT OLEDB for the AS400. We are pulling down records from a table and the field is being seen as a TBytesField; I have tried the ODBC driver as well and it also sees a TBytesField. Directly on the AS400 I can query the data and see readable text, and the iSeries Navigator tool shows readable text as well. However, when I bring it down to the Delphi client via HIT OLEDB or HIT ODBC and view it via AsString, I see unreadable text, something like this:

        ñðð@ðõñððððñ÷@õôððõñòøóóöøñðÂÁÕÒ@ÖÆ@ÁÔÅÙÉÃÁ@@@@@@@@ÂÈÙÉâãæÁðòñè@ÔK@k@ÉÕÃK@@@@@@@@@ç

    I jumbled up the text above, but those are the character types that show up. When I did a test in D2010 the text looked like Japanese or Chinese characters, but displayed as an AnsiString it looks like it does in Delphi 6. I am thinking this may have something to do with code pages or character sets, but I have no experience in this area, so it is new to me. When I look at the Coded Character Set on the AS400 it is set to 65535. What do I need to do to make this text readable?

    We do have a third-party component (Delphi400) that makes things behave in a more native AS400 manner. When I use its AS400 connection and query components it shows the field as a TStringField and displays just fine. But we are phasing out this product (for a number of reasons) and would really like the OLEDB route with the ADO components to work. Just for clarification, HIT OLEDB with TADOQuery does show some fields as TStringFields for many of the other tables we use; I am not sure why it shows a TBytesField in this case. I am not an AS400 expert, but looking at the field definitions on the AS400, the ones showing up as TBytesFields look the same as the ones showing up as TStringFields, though there must be a difference somewhere. Maybe due to being multi-member? So, does anyone have any guidance on how to get readable string data? If you need more info please ask. Greg
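    A possible angle (hedged; untested against HIT's drivers): CCSID 65535 marks a column as binary data, so OLEDB/ODBC clients skip the EBCDIC-to-ASCII translation and surface raw bytes, which is exactly the ñð@-style output shown. If the driver exposes no "translate CCSID 65535" option, forcing a translatable CCSID in the query itself may work; the column name here is assumed:

        -- Cast the binary-tagged column to a real EBCDIC CCSID
        -- (37 = US English) so the driver translates it on the way out:
        SELECT CAST(somefield AS CHAR(50) CCSID 37) AS somefield_text
        FROM aliasname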

    Read the article

  • Oracle SQL: using bind variables for dates

    - by user333747
    Here is a simple working query without bind variables:

        select * from table1 where time_stamp > sysdate - INTERVAL '1' day;

    where time_stamp is of type DATE. I should be able to input any number of days in the above query using a bind variable. So I tried the following, which does not seem to work:

        select * from table1 where time_stamp > sysdate - INTERVAL :days day;

    I tried entering the numeric input both as 10 and as '10', for example. You get an ORA-00933 error on 10g.
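    The usual workaround (standard Oracle behavior; the > comparison above is reconstructed, since the original operator was lost in extraction): an INTERVAL literal must be a literal, so route the bind variable through NUMTODSINTERVAL, or use plain day arithmetic since DATE math already works in days:

        -- Bind-friendly interval:
        select * from table1
        where time_stamp > sysdate - NUMTODSINTERVAL(:days, 'DAY');

        -- Or simply, since subtracting a number from a DATE means days:
        select * from table1
        where time_stamp > sysdate - :days;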

    Read the article

  • "Executing SQL directly; no cursor" error when using SCOPE_IDENTITY/IDENT_CURRENT

    - by Chris
    There wasn't much on Google about this error, so I'm asking here. I'm switching a PHP web application from MySQL to SQL Server 2008 (using ODBC, not php_mssql). Running queries and everything else isn't a problem, but when I try to call SCOPE_IDENTITY (or any similar function), I get the error "Executing SQL directly; no cursor". I'm doing this immediately after an insert, so it should still be in scope. Running the same insert statement and then querying for the insert ID works fine in SQL Server Management Studio. Here's my code right now (everything else in the database wrapper class works fine for other queries, so I'll assume it isn't relevant):

        function insert_id(){
            return $this->query_first("SELECT SCOPE_IDENTITY() as insert_id");
        }

    query_first is a function that returns the first result from the first field of a query (basically the equivalent of ExecuteScalar() in .NET). The full error message:

        Warning: odbc_exec() [function.odbc-exec]: SQL error: [Microsoft][SQL Server Native Client 10.0][SQL Server]Executing SQL directly; no cursor., SQL state 01000 in SQLExecDirect in C:[...]\Database_MSSQL.php on line 110
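    A workaround sketch (resting on an assumption about the cause: each odbc_exec call is its own batch, so a SCOPE_IDENTITY() run separately has no scope to report on, and SQL state 01000 is really an informational message that PHP's ODBC layer surfaces as a warning): issue the INSERT and the identity read as one batch in one call, then step to the second result set. Table and column names here are hypothetical:

        $sql = "INSERT INTO items (name) VALUES ('widget'); "
             . "SELECT SCOPE_IDENTITY() AS insert_id;";
        $result = odbc_exec($conn, $sql);
        odbc_next_result($result);   // move past the INSERT's row count
        odbc_fetch_row($result);
        $insertId = odbc_result($result, 'insert_id');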

    Read the article

  • Mouse over effect with jQuery in richfaces datatable and datascroller combo

    - by John
    Hi, I'm having a problem defining a mouse-over effect for my data tables. I have:

        <a4j:form>
            <rich:dataTable id="dataTable">
                ...
            </rich:dataTable>
            <rich:datascroller id="dataScroller" for="dataTable" />
        </a4j:form>

        <rich:jQuery selector="#dataTable tr" query="mouseover(function(){jQuery(this).addClass('active-row')})"/>
        <rich:jQuery selector="#dataTable tr" query="mouseout(function(){jQuery(this).removeClass('active-row')})"/>

    These work fine on the very first page. However, if I use the datascroller to go to another page, the mouseover effect is gone. I've tried re-rendering the table or the jQuery components; that didn't help with the problem at all. Any suggestion on how I can get this working? Thanks!
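    A sketch of one possible fix (an assumption: the scroller replaces the <tr> elements, so handlers bound directly to them vanish with the old rows): use jQuery's delegated live() binding, which survives DOM replacement. This assumes the jQuery bundled with your RichFaces version is 1.3 or newer:

        <rich:jQuery selector="#dataTable tr" timing="onload"
            query="live('mouseover', function(){jQuery(this).addClass('active-row')})"/>
        <rich:jQuery selector="#dataTable tr" timing="onload"
            query="live('mouseout', function(){jQuery(this).removeClass('active-row')})"/>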

    Read the article

  • How does one check if a table exists in an Android SQLite database?

    - by camperdave
    I have an Android app that needs to check whether there's already a record in the database: if not, it processes some things and eventually inserts it; if so, it simply reads the data. I'm using a subclass of SQLiteOpenHelper to create and get a writable instance of SQLiteDatabase, which I thought automatically took care of creating the table if it didn't already exist (since the code to do that is in the onCreate(...) method). However, when the table does not yet exist and the first method run on the SQLiteDatabase object is a call to query(...), my logcat shows "I/Database(26434): sqlite returned: error code = 1, msg = no such table: appdata", and sure enough, the appdata table isn't being created. Any ideas why? I'm looking for either a method to test whether the table exists (because if it doesn't, the data's certainly not in it, and I don't need to read it until I write to it, which seems to create the table properly), or a way to make sure it gets created, just empty, in time for that first call to query(...).
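    A sketch of the existence check (table name taken from the log): query sqlite_master, the catalog table SQLite maintains in every database. Note also that onCreate() only fires when the database file is first created, so if the file already exists from an earlier version, bumping the version and creating the table in onUpgrade() is the usual cure.

        // Returns true if the given table exists in this database.
        public static boolean tableExists(SQLiteDatabase db, String table) {
            Cursor c = db.rawQuery(
                    "SELECT name FROM sqlite_master WHERE type='table' AND name=?",
                    new String[] { table });
            try {
                return c.moveToFirst();   // a row means the table exists
            } finally {
                c.close();
            }
        }

        // usage: if (!tableExists(db, "appdata")) { /* create it first */ }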

    Read the article

  • java.sql.SQLException: SQL logic error or missing database

    - by Sunil Kumar Sahoo
    Hi all, I have created a database connection to SQLite using JDBC in Java. My SQL statements execute properly, but sometimes I get the following error when I call conn.commit():

        java.sql.SQLException: SQL logic error or missing database

    Can anyone please help me avoid this type of problem, or suggest a better approach to writing JDBC programs?

        Class.forName("org.sqlite.JDBC");
        conn = DriverManager.getConnection("jdbc:sqlite:/home/Data/database.db3");
        conn.setAutoCommit(false);
        String query = "Update Chits set BlockedForChit = 0 where ServerChitID = '"
                + serverChitId + "' AND ChitGatewayID = '" + chitGatewayId + "'";
        Statement stmt = conn.createStatement();
        try {
            stmt.execute(query);
            conn.commit();
            stmt.close();
            stmt = null;
        }

    Thanks, Sunil Kumar Sahoo
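    A tighter pattern worth trying (a sketch, not a diagnosis; "SQL logic error or missing database" is SQLite's generic error and often surfaces when another statement handle is still holding the file): bind parameters instead of concatenating strings, and close the statement in finally so no stray handle keeps the database busy:

        String sql = "UPDATE Chits SET BlockedForChit = 0 "
                   + "WHERE ServerChitID = ? AND ChitGatewayID = ?";
        PreparedStatement ps = conn.prepareStatement(sql);
        try {
            ps.setString(1, serverChitId);   // no hand-rolled quoting
            ps.setString(2, chitGatewayId);
            ps.executeUpdate();
            conn.commit();
        } finally {
            ps.close();                      // always release the handle
        }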

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle’s acquisition of Datanomic, and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration!

    As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, Peoplesoft, CRM on Demand, Fusion, DRM, Endeca, RightNow, and more – and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box.

    Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack – a ready set of standard processes with standard interfaces, providing integrated:

    - Address verification and cleansing
    - Individual matching
    - Organization matching

    The services are suitable for either batch or real-time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total across all locales, CDS includes well over a million dictionary entries.

    (Figure: Excerpt from EDQ’s CDS Individual Name Standardization Dictionary)

    CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a ‘best of both worlds’ situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate ‘Identity Resolution’ and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products.

    Here is a brief guide to the main services provided in the pack.

    Address Verification and Standardization

    (Figure: EDQ’s CDS Address Cleaning Process)

    The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch. The Address Verification processor is wrapped in an EDQ process – this adds significant capabilities over calling the underlying Address Verification API directly, specifically:

    - Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API
    - Optimization of address verification by pre-standardizing data where required
    - Formatting of output addresses into the input address fields normally used by applications
    - Adding descriptions of the address verification and geocoding return codes

    The process can then be used to provide real-time and batch address cleansing in any application, such as a simple web page calling address cleaning and geocoding as part of a check on individual data.

    Duplicate Prevention

    Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention to avoid issues caused by complex replication and synchronization of large-volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called, and returns a number of key values. These are used to select other records (‘candidates’) that may match in the application data (which has been pre-seeded with keys using the same service). The ‘driving record’ (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record, and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits. The process is explained below:

    (Figure: EDQ Duplicate Prevention Architecture)

    Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit).

    Batch Matching

    In order to allow customers to use different match rules in batch from real-time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front-end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case normally a more precise (and automated) type of matching is required, in order to minimize the review work performed by Data Stewards. In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel’s Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel’s ‘Full Match’ (match all records against each other) and ‘Incremental Match’ (match a subset of records against all of their selected candidates) modes.

    The Future

    The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including:

    - An out-of-the-box Data Quality Dashboard
    - Even more comprehensive international data handling
    - Address search (suggesting multiple results)
    - Integrated address matching

    The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • TSQL Quiz 2011 on beyondrelational.com

    - by Jalpesh P. Vadgama
    One of my friends, Jacob Sebastian, is running a SQL Server TSQL quiz on his site beyondrelational.com. This is a great opportunity to learn TSQL and win great prizes, like an Apple iPad and lots of other cool stuff. So whether you are an expert or still learning TSQL, it's a great way to test your knowledge. For the whole month of March, a selected quiz master will ask a question each day; answer these questions day by day, and at the end of the month you will have a great chance to win an Apple iPad. For more details you can visit the following link: http://beyondrelational.com/quiz/SQLServer/TSQL/2011/default.aspx Hope you liked it. Stay tuned for more..

    Read the article

  • Arguments of using WCF/OData as access layer instead of EF/L2S/nHibernate directly

    - by Carl Hörberg
    We develop mostly low-traffic but highly specialized web applications. Normally we use L2S, EF or NHibernate as the access layer and then throw ASP.NET MVC at it; for normal CRUD operations we query the ISession/DataContext directly, but for more advanced functions/side effects we put it in some kind of service layer. Now, I was thinking about publishing the data through OData (WCF Data Services) and querying that from the controllers (or even from jQuery when a good template engine shows up), and publishing the service operations through a WCF service (or as custom methods on the WCF Data Service?). What advantages/disadvantages does this architecture pose? Do I gain anything except higher complexity and latency? Better separation of concerns (or is that just an illusion)?

    Read the article

  • PHP - can a method return a pointer?

    - by Kerry
    I have a method in a class trying to return a pointer:

        <?php
        public function prepare( $query ) {
            // bla bla bla
            return &$this->statement;
        }
        ?>

    But it produces the following error:

        Parse error: syntax error, unexpected '&' in /home/realst34/public_html/s98_fw/classes/sql.php on line 246

    This code, however, works:

        <?php
        public function prepare( $query ) {
            // bla bla bla
            $statement = &$this->statement;
            return $statement;
        }
        ?>

    Is this just the nature of PHP, or am I doing something wrong?
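    This is by design: PHP only returns by reference when the & appears on the function declaration and on the assignment at the call site; return &... is never valid syntax. (The second snippet above parses, but it still returns by value.) A sketch of the supported form, with a hypothetical wrapper class for context:

        <?php
        class Sql {
            private $statement;

            // The & on the declaration makes this return by reference.
            public function &prepare( $query ) {
                // bla bla bla
                return $this->statement;
            }
        }

        // The call site needs =& as well, or the reference is dropped:
        $db = new Sql();
        $stmt =& $db->prepare("SELECT 1");
        ?>

    Worth noting: if $statement holds an object, PHP 5 already passes it around as a handle, so the reference is usually unnecessary.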

    Read the article

  • How to use Enum as NamedQuery parameters in JPA

    - by n002213f
    I have an entity with an enum attribute and a couple of NamedQueries. One of these NamedQueries has the enum attribute as a parameter, i.e.:

        SELECT m FROM Message m WHERE status = :status

    When I try to run the query I get the following error:

        Caused by: java.lang.IllegalArgumentException: You have attempted to set a value of type class my.package.Status for parameter status with expected type of class my.package.Status from query string SELECT m FROM Message m WHERE m.status = :status.

    I'm using TopLink. Why is this? How do I make JPA happy?
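    A reading of the error (an educated guess, not a confirmed TopLink diagnosis): when the expected and actual types print identically, the class has usually been loaded by two different classloaders, which often happens after a hot redeploy; a clean redeploy or container restart tends to clear it. A sketch of both the normal binding and a parameter-free fallback (the query name and enum constant are hypothetical):

        // Normal parameter binding (works once a single classloader is involved):
        List<Message> messages = em.createNamedQuery("Message.byStatus")
                                   .setParameter("status", Status.ACTIVE)
                                   .getResultList();

        // Fallback: JPQL accepts fully qualified enum literals, which
        // sidesteps parameter binding entirely:
        //   SELECT m FROM Message m WHERE m.status = my.package.Status.ACTIVE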

    Read the article
