Search Results

Search found 70568 results on 2823 pages for 'ef database first'.


  • What are the pros of switching DNS names with a database server hardware upgrade?

    - by wilbbe01
    When we upgrade to new hardware at work we usually increment a number in the DNS name. For example, we have a server called database-2 that is slated to become database-3 in the coming days. I haven't been able to find a good reason why this is good behavior. To me, the work of trying to catch all the end-user machines, as well as all the servers dependent on the database server, is far riskier than simply moving the database and its IP/name with it to the new hardware. A little over a year ago we spent several months fielding requests as infrequent users discovered that their software needed to be updated to point to the new DNS name. I am struggling to find answers as to why this is a good practice. So, the question: why is using DNS names as a "server hardware version identifier" a good idea? What am I overlooking? Thanks much.

    Read the article

  • ApplyCurrentValues in EF 4

    - by ali62b
    I was just playing with EF 4 in VS 2010 RC and found that ApplyCurrentValues doesn't work when the property is of type bool and the new value is false; it works when the new value is true. I don't know if this is a bug or if I'm missing something, but for now I'm using a very ugly workaround:

        public void UpdateProduct(Product updatedProduct)
        {
            using (model)
            {
                model.Products.Attach(new Product { ProductID = updatedProduct.ProductID });
                model.Products.ApplyCurrentValues(updatedProduct);
                Product originalProduct = model.Products.Single(p => p.ProductID == updatedProduct.ProductID);
                originalProduct.Discontinued = updatedProduct.Discontinued;
                model.SaveChanges();
            }
        }

    Any idea or a better workaround?
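    For what it's worth, a likely explanation: the attached stub created with new Product { ProductID = ... } has every other property at its CLR default, and ApplyCurrentValues only marks as modified the properties whose incoming value differs from the attached entity's value, so an incoming false matches the default false and is never written. A minimal sketch of a cleaner workaround, loading the original row first so the comparison runs against real values:

        public void UpdateProduct(Product updatedProduct)
        {
            using (model)
            {
                // Load the current database values into the context first...
                model.Products.Single(p => p.ProductID == updatedProduct.ProductID);

                // ...so every changed property, including bool -> false, is detected.
                model.Products.ApplyCurrentValues(updatedProduct);
                model.SaveChanges();
            }
        }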

    Read the article

  • WCF Data Services consuming data from EF based repository

    - by John Kattenhorn
    We have an existing repository which is based on EF 4 / POCO and is working well. We want to add a service layer using WCF Data Services and are looking for some best-practice advice. So far we have developed a class with an IQueryable property whose getter triggers the repository's 'get all users' method. The problems so far have been two-fold: 1) it required us to decorate the ID field of the POCO object to tell the data service which field was the key, which means our POCO object is no longer 'pure'; 2) it cannot figure out the relationships between the objects (which is obvious, I guess). I've now stopped this approach and I'm thinking that maybe we should expose the ObjectContext from the repository and use more of the 'automatic' functionality of EF. Has anybody got any advice or examples of using the repository pattern with WCF Data Services?
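    For reference, the 'automatic' route mentioned above points WCF Data Services directly at the EF ObjectContext, letting the runtime infer keys and relationships from the model instead of from attributes on the POCOs. A minimal sketch, assuming a generated context named MyEntities with a Users entity set, hosted as a .svc service:

        using System.Data.Services;
        using System.Data.Services.Common;

        public class UsersDataService : DataService<MyEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                // Publish only the sets the service layer should expose; read-only here.
                config.SetEntitySetAccessRule("Users", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    The trade-off the question hints at still stands: this couples the service to the EF model, whereas the IQueryable-property approach keeps the repository abstraction at the cost of extra metadata attributes.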

    Read the article

  • Arguments of using WCF/OData as access layer instead of EF/L2S/nHibernate directly

    - by Carl Hörberg
    We develop mostly low-traffic but highly specialized web applications. Normally we use L2S, EF or NHibernate as the access layer, and then throw ASP.NET MVC at it; for normal CRUD operations we query the ISession/DataContext directly, but more advanced functions/side effects go into some kind of service layer. Now I was thinking about publishing the data through OData (WCF Data Services), querying that from the controllers (or even from jQuery, when a good template engine shows up), and publishing the service operations through a WCF service (or as custom methods on the WCF Data Service?). What advantages/disadvantages does this architecture pose? Do I gain anything except higher complexity and latency? Better separation of concerns (or is that just an illusion)?

    Read the article

  • SQL SERVER – Find First Non-Numeric Character from String

    - by pinaldave
    It is fun when you have to deal with simple problems for which there is no out-of-the-box solution. I am sure there are many cases when we need the first non-numeric character from a string, but there is no function available to identify it right away. Here is the quick script I wrote using PATINDEX. The PATINDEX function has existed in SQL Server for quite a long time, but I hardly see it being used. Well, at least I use it and I am comfortable using it. Here is a simple script which I use when I have to identify the first non-numeric character.

        -- How to find the first non-numeric character
        USE tempdb
        GO
        CREATE TABLE MyTable (ID INT, Col1 VARCHAR(100))
        GO
        INSERT INTO MyTable (ID, Col1)
        SELECT 1, '1one'
        UNION ALL
        SELECT 2, '11eleven'
        UNION ALL
        SELECT 3, '2two'
        UNION ALL
        SELECT 4, '22twentytwo'
        UNION ALL
        SELECT 5, '111oneeleven'
        GO
        -- Use of PATINDEX
        SELECT PATINDEX('%[^0-9]%',Col1) 'Position of NonNumeric Character',
               SUBSTRING(Col1,PATINDEX('%[^0-9]%',Col1),1) 'NonNumeric Character',
               Col1 'Original Character'
        FROM MyTable
        GO
        DROP TABLE MyTable
        GO

    Here is the resultset: [resultset screenshot not reproduced]. Where do I use this in the real world? Well, there are lots of examples; in one of my future blog posts I will cover that as well. Meanwhile, do you have a better way to achieve the same result? Do share it here. I will write a follow-up blog post with due credit to you. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL String, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to perform insert and update with Ado.net dataservices (EF and Inheritance)

    - by Thurein
    Hi, I have an entity model in which I have table-per-type inheritance. There are 3 types: Contact, which I defined as abstract in my EF model, and the Company and Person types, which are derived from the Contact type. Is it possible to perform an insert using ADO.NET Data Services and the ASP.NET AJAX library? I was trying the following client code:

        dataContext.insertEntity(person, "Contacts");

    I was getting this response from the server: "Error processing request stream. Type information must be specified for types that take part in inheritance." Thanks.
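    The error message suggests the JSON request body never identified which derived type to create. In the ADO.NET Data Services JSON format, that information travels in a __metadata.type property, so a POST to the Contacts entity set for a Person would need a body shaped roughly like the sketch below (the Model namespace and the property names are assumptions about this particular EDM):

        { "__metadata": { "type": "Model.Person" },
          "FirstName": "John",
          "LastName": "Doe" }

    If the AJAX library's insertEntity does not let you inject __metadata directly, one workaround is to issue the POST yourself with such a body.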

    Read the article

  • First Look - Oracle Data Mining

    - by kimberly.billings
    In his blog, JT on EDM, James Taylor shares his analysis of Oracle Data Mining, including its new GUI and Exadata integration. While Oracle Data Mining has been available for a while, it is now easier to access and try via the Amazon Cloud. Using the Oracle 11gR2 Data Mining Amazon Machine Image (AMI), you can launch an Oracle Data Mining-enabled instance directly through Amazon Web Services (AWS) and connect to it using the Oracle Data Miner graphical user interface. The new Oracle Data Mining GUI, which will be available to beta customers soon, offers more graphics; the ability to define, save and share analytical "workflows" to solve business problems; and more automation and simplicity. Taylor comments that "the UI looks to have a nice look and feel including graphical model development flows, easy access to the data, nice little micro graphs when browsing data records and more." On using Oracle Data Mining with Exadata, Taylor writes, "Oracle says that the use of the ODM routines in the Exadata kernel is faster than running a native ODM model in the database by a factor of 2 and that this increases as more joins are used. This could mean that ODM outperforms even third party in-database analytics." Taylor concludes his blog with a positive overall review, stating that "ODM is a nice product for Oracle database customers and well worth looking into. The new UI will only make it more so." Read the blog.

    Read the article

  • SQL SERVER – First Month as DBA Trainee – Disasters and Recovery

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday hosted by Allen Kinsel. He has selected a very interesting subject for T-SQL Tuesday – Disaster and Recovery. This subject took me into my past. There were various things I did or proposed when I started my very first month as a DBA trainee. I was tagged along with a very senior DBA in my organization who always protected me or corrected my mistakes. He was a great guy and totally understood the young mind of an over-enthusiastic trainee DBA. I respect him very much. Here are a few things which I learned in my very first month (not necessarily things I have practiced on production):

    - Never compress (zip) a native backup using any tool; when disaster happens, the extra time needed to un-compress the database can be too long and unacceptable for the business SLA.
    - Do not truncate logs.
    - After restoring a full database backup, restore only the latest differential backup; there is no need to restore all the backups.
    - Always write a WHERE condition when deleting and updating.
    - The Sr. DBA always advised me: always keep your résumé ready and your car ready - you never know when you cannot recover from a disaster! Well, for sure it was a joke.

    Today's T-SQL Tuesday reminds me of my very first month as a DBA trainee. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Best Practices, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • generate EF model by System.Diagnostics.Process

    - by loviji
    Hello, after reading this article I tried to generate an EF model with System.Diagnostics.Process:

        Process myProcess = new Process();
        var cs = "Data Source=.\\SQLEXPRESS; Initial Catalog=uqs; Integrated Security=SSPI";
        myProcess.StartInfo.FileName = @"C:\Windows\Microsoft.NET\Framework\v3.5\EdmGen.exe";
        myProcess.StartInfo.Arguments = "/mode:fullgeneration /c:" + cs + " project:School /entitycontainer:SchoolEntities /namespace:SchoolModel /language:CSharp ";
        myProcess.Start();

    1. I haven't got a result, because I can't build a well-formed argument string; as I found, there are many quotes involved. How should I organize the argument string?
    2. How can I detect whether generation completed, or whether errors occurred?
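    A hedged sketch of both fixes: wrap the connection string in escaped quotes so its embedded spaces survive as one argument, restore the missing '/' on the project switch, and redirect the output streams plus check the exit code to detect success (treat the EdmGen path and switches as assumptions matching the question):

        using System;
        using System.Diagnostics;

        var cs = "Data Source=.\\SQLEXPRESS;Initial Catalog=uqs;Integrated Security=SSPI";

        var psi = new ProcessStartInfo
        {
            FileName = @"C:\Windows\Microsoft.NET\Framework\v3.5\EdmGen.exe",
            // Escaped quotes keep the connection string together as a single argument.
            Arguments = "/mode:fullgeneration /c:\"" + cs + "\" /project:School" +
                        " /entitycontainer:SchoolEntities /namespace:SchoolModel /language:CSharp",
            UseShellExecute = false,          // required for stream redirection
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            CreateNoWindow = true
        };

        using (var p = Process.Start(psi))
        {
            string output = p.StandardOutput.ReadToEnd();
            string errors = p.StandardError.ReadToEnd();
            p.WaitForExit();

            // A non-zero exit code signals failure; the messages land on stdout/stderr.
            if (p.ExitCode != 0)
                throw new InvalidOperationException("EdmGen failed: " + errors + output);
        }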

    Read the article

  • XOLO X900–First mobile phone with Intel Power

    - by Rekha
    XOLO X900 is XOLO’s offering of the world’s first smartphone with the power of Intel inside®, born of Intel shaking hands with LAVA International Ltd., one of India’s fastest-growing handset brands. The R&D centres are in Shenzhen (China) and Bangalore (India). The smartphone gives fast web browsing with its 1.6 GHz Intel processor and smooth multi-tasking using Intel’s patented Hyper-Threading technology. It has optimum battery usage; a 4.03" hi-resolution LCD screen of 1024x600 pixels to ensure crisp text and vibrant images; an HDMI output port for TV; full HD 1080p playback; and dual speakers. It has an 8 MP HD camera with certain DSLR-like features, allowing you to click up to 10 photos in less than a second. 3D and HD gaming is immensely realistic with the 400 MHz graphics processing unit. The operating system used here is Android 2.3 (Gingerbread), upgradable to Android 4.0. It has GPS and rear and front cameras of 8 MP and 1.3 MP respectively. Accelerometer, gyroscope, magnetometer, ambient light sensor and proximity sensor are all enabled in this smartphone. Intel’s smartphone venture begins in India: the phone is said to go on sale in India from April 23, 2012 onwards, at a best-buy price of approximately INR 22,000, initially at the Indian retail chain Croma; it will be available in other retail stores and online stores from early May. The company is launching this smartphone in India first and a more powerful handset in China later this year. Depending on its success in India and China, Intel plans to move into the European and US markets. Till then, Intel smartphones are only for Indian buyers. You can find more technical information on XOLO’s site.

    Read the article

  • Change notification in EF EntityCollection

    - by Savvas Sopiadis
    Hi everybody! In a Silverlight 4 project I'm using WCF RIA Services, MVVM principles and EF 4. I'm running into this situation: I created an entity called Category and another called CategoryLocale (generated by VS, no POCO). The relation between them is 1-to-N (one Category can have many CategoryLocales), so through this relationship one can implement master-detail scenarios. Every time I change a property in the master record (Category), a NotifyPropertyChanged notification is raised. But whenever I change a property in the detail (CategoryLocale), nothing is raised. The detail part is bound to a DataGrid like this:

        <sdk:DataGrid Grid.Row="3" Grid.ColumnSpan="2"
                      ItemsSource="{Binding SelectedRecord.CategoryLocales, Mode=TwoWay}"
                      AutoGenerateColumns="False"
                      VerticalScrollBarVisibility="Auto" >

    Any help is appreciated! Thanks in advance.
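    One hedged reading of this: each CategoryLocale raises PropertyChanged on itself, not on the parent Category, so a view model that only listens to the master never hears the detail edits. A sketch that bubbles the child notifications up (SelectedRecord and RaisePropertyChanged are assumptions about the surrounding view model code):

        // Re-run this whenever SelectedRecord changes; RIA Services entities
        // implement INotifyPropertyChanged, so each detail row can be hooked.
        foreach (CategoryLocale locale in SelectedRecord.CategoryLocales)
        {
            locale.PropertyChanged += (s, e) =>
                RaisePropertyChanged("SelectedRecord");   // nudge bindings on the master
        }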

    Read the article

  • How to perform a many-to-many Linq query with Include in the EF.

    - by despart
    Hi, I don't know how to perform this query using LINQ and the EF. Imagine I have three tables A, B and C. A and B have a many-to-many relationship; B and C have a 1-to-many relationship. I want to obtain records from B, including C, but filtering by A's Id. I can easily get the records from B:

        var b = Context.A.Where(x => x.Id.Equals(aId)).SelectMany(x => x.B);

    but when I try to include C I don't know how to do it:

        // This doesn't work
        var b = Context.A.Where(x => x.Id.Equals(aId)).SelectMany(x => x.B.Include("C"));

    Also I've tried this with no luck (it is equivalent to the above):

        // Not working
        var b = (from a in Context.A.Where(x => x.Id.Equals(aId))
                 from b in a.B.Include("C")
                 select b);

    Thanks for your help.
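    A hedged sketch of the usual fix: Include is only honoured on the root of the query, and it is discarded once the query shape changes through SelectMany. Rooting the query at B instead keeps the eager load intact (the set and navigation names are taken from the question):

        // Start from B so Include("C") applies to the query root, then
        // filter through the many-to-many navigation back to A.
        var b = Context.B
                       .Include("C")
                       .Where(x => x.A.Any(a => a.Id == aId));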

    Read the article

  • Mapped Stored Procedures in EF 1

    - by michael.lukatchik
    All, I'm using mapped stored procedures in EF 1. I've completed the following steps: 1) I've created my INSERT, UPDATE, and DELETE queries in SQL Server. 2) I've built the EDMX and imported the INSERT, UPDATE, and DELETE sprocs as part of my model. 3) I've set up a stored procedure mapping on a table inside of my EDMX file; the INSERT, UPDATE, and DELETE sprocs were mapped accordingly. Using this approach, I would expect to rebuild the application (and mine builds successfully) and then see the stored procedures as available function names on my EDMX object, such as: _entities.InsertComment(..), _entities.UpdateComment(..), and _entities.DeleteComment(..). IntelliSense is not picking these names up and I can't figure out why. If I perform these same steps using EF 4, the function names are automatically picked up by IntelliSense after adding the stored procedure mappings. Is this a bug in EF 1? Is there something else I should be doing? Thanks in advance, Mike
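    A hedged note on what is probably happening: a CUD stored-procedure mapping never surfaces as a method on the context in either EF version; it is invoked implicitly when SaveChanges processes the mapped entity. The methods that do appear come from function imports, a separate feature; EF 1's designer support for function imports is much weaker, so they generally have to be added by hand in the Model Browser, and EF 1 can only import sprocs that return entities. A sketch of how the mapped sprocs actually fire (the context and entity names are hypothetical, following the question):

        using (var entities = new MyEntities())   // hypothetical context type
        {
            var comment = new Comment();           // entity whose table has the sproc mapping
            comment.Text = "Hello";                // hypothetical property
            entities.AddToComments(comment);

            // SaveChanges notices the added Comment and calls the mapped INSERT
            // sproc instead of generating SQL - no InsertComment method is needed.
            entities.SaveChanges();
        }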

    Read the article

  • Gathering statistics for an Oracle WebCenter Content Database

    - by Nicolas Montoya
    Have you ever heard: "My Oracle WebCenter Content instance is running slow. I checked the memory and CPU usage of the application server and it has plenty of resources. What could be going wrong?"

    An Oracle WebCenter Content instance runs on an application server and relies on a database server on the back end. If your application server tier is running fine, chances are that your database server tier may host the root of the problem. While many things could cause performance problems, on active Enterprise Content Management systems, keeping database statistics updated is extremely important.

    The Oracle Database has a set of built-in optimizer utilities that can help make database queries more efficient. It is strongly recommended to update or re-create the statistics about the physical characteristics of a table and the associated indexes in order to maximize the efficiency of the optimizer. These physical characteristics include:

    - Number of records
    - Number of pages
    - Average record length

    The frequency with which you need to update statistics depends on how quickly the data is changing. Typically, statistics should be updated when the number of new items since the last update is greater than ten percent of the number of items when the statistics were last updated. If a large number of documents are being added to or removed from the system, a post step should be added to gather statistics upon completion of this massive data change. In some cases, you may need to collect statistics in the middle of the data processing to expedite its execution. These processes include, but are not limited to: data migration, bootstrapping of a new system, records management disposition processing (typically at the end of the calendar year), etc. A DOCUMENTS table with ten million rows will often generate a very different plan than a table with just a thousand.

    A quick check of the statistics for the WebCenter Content (WCC) database can be performed via the query below:

        SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN,
               TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS')
        FROM DBA_TABLES
        WHERE TABLE_NAME='DOCUMENTS';

        OWNER       TABLE_NAME   NUM_ROWS   BLOCKS   AVG_ROW_LEN   TO_CHAR(LAST_ANALYZ
        ---------   ----------   --------   ------   -----------   -------------------
        ATEAM_OCS   DOCUMENTS        4172       46            61   04/06/2012 11:17:51

    This output returns not only the date when the WCC table DOCUMENTS was last analyzed, but also the <DATABASE SCHEMA OWNER> for this table, in the form <PREFIX>_OCS.

    This database username can then be used to check other objects owned by the WCC <DATABASE SCHEMA OWNER>, as shown below:

        SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN,
               TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS')
        FROM DBA_TABLES
        WHERE OWNER='ATEAM_OCS'
        ORDER BY NUM_ROWS ASC;
        ...
        OWNER       TABLE_NAME             NUM_ROWS   BLOCKS   AVG_ROW_LEN   TO_CHAR(LAST_ANALYZ
        ---------   --------------------   --------   ------   -----------   -------------------
        ATEAM_OCS   REVISIONS                  2051       46           141   04/09/2012 22:00:22
        ATEAM_OCS   DOCUMENTS                  4172       46            61   04/06/2012 11:17:51
        ATEAM_OCS   ARCHIVEHISTORY             4908      244           218   04/06/2012 11:17:49
        ATEAM_OCS   DOCUMENTHISTORY            5865      110            72   04/06/2012 11:17:50
        ATEAM_OCS   SCHEDULEDJOBSHISTORY      10131      244           131   04/06/2012 11:17:54
        ATEAM_OCS   SCTACCESSLOG              10204      496           268   04/06/2012 11:17:54
        ...

    The Oracle Database allows statistics of many different kinds to be collected as an aid to improving performance. The DBMS_STATS package is concerned with optimizer statistics only. The database sets automatic statistics collection of this kind on by default; the DBMS_STATS package is intended for specialized cases only.

    The following subprograms gather certain classes of optimizer statistics:

    - GATHER_DATABASE_STATS Procedures
    - GATHER_DICTIONARY_STATS Procedure
    - GATHER_FIXED_OBJECTS_STATS Procedure
    - GATHER_INDEX_STATS Procedure
    - GATHER_SCHEMA_STATS Procedures
    - GATHER_SYSTEM_STATS Procedure
    - GATHER_TABLE_STATS Procedure

    The DBMS_STATS.GATHER_SCHEMA_STATS PL/SQL procedure gathers statistics for all objects in a schema:

        DBMS_STATS.GATHER_SCHEMA_STATS (
            ownname          VARCHAR2,
            estimate_percent NUMBER   DEFAULT to_estimate_percent_type(get_param('ESTIMATE_PERCENT')),
            block_sample     BOOLEAN  DEFAULT FALSE,
            method_opt       VARCHAR2 DEFAULT get_param('METHOD_OPT'),
            degree           NUMBER   DEFAULT to_degree_type(get_param('DEGREE')),
            granularity      VARCHAR2 DEFAULT GET_PARAM('GRANULARITY'),
            cascade          BOOLEAN  DEFAULT to_cascade_type(get_param('CASCADE')),
            stattab          VARCHAR2 DEFAULT NULL,
            statid           VARCHAR2 DEFAULT NULL,
            options          VARCHAR2 DEFAULT 'GATHER',
            objlist          OUT      ObjectTab,
            statown          VARCHAR2 DEFAULT NULL,
            no_invalidate    BOOLEAN  DEFAULT to_no_invalidate_type(get_param('NO_INVALIDATE')),
            force            BOOLEAN  DEFAULT FALSE);

    There are several values for the OPTIONS parameter that we need to know about:

    - GATHER reanalyzes the whole schema.
    - GATHER EMPTY only analyzes tables that have no existing statistics.
    - GATHER STALE only reanalyzes tables with more than 10 percent modifications (inserts, updates, deletes).
    - GATHER AUTO reanalyzes objects that currently have no statistics and objects with stale statistics. Using GATHER AUTO is like combining GATHER STALE and GATHER EMPTY.

    Example:

        exec dbms_stats.gather_schema_stats( -
          ownname => '<PREFIX>_OCS', -
          options => 'GATHER AUTO' -
        );

    Read the article

  • WPF Composite - Expose EF model to all modules

    - by Tony
    Hi all, I have an application that uses WPF Composite, and I have an issue. I've got a big database attached to the application and I need it exposed to the different modules of the application. What is the best way to expose my Entity Framework model to all my modules and the views inside them? Do I have one EF model, or a separate one for each module containing only the tables that module needs? The only problem being that some tables have relationships, and will have different views, and those views will be in different modules. Any ideas how to resolve this?
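    One hedged approach is to keep a single EF model behind a repository interface, registered once as a shared service in the composite container so every module resolves the same instance instead of owning its own context. A minimal sketch assuming Prism with Unity (the interface, its members and the type names are invented for illustration):

        using Microsoft.Practices.Unity;

        // Shared contract that every module programs against.
        public interface ICatalogRepository
        {
            // e.g. IQueryable<Customer> Customers();
        }

        // Wraps the single EF model; lives in a shared infrastructure assembly.
        public class EfCatalogRepository : ICatalogRepository
        {
        }

        public static class ServiceRegistration
        {
            public static void Register(IUnityContainer container)
            {
                // One repository instance shared by all modules and their views.
                container.RegisterType<ICatalogRepository, EfCatalogRepository>(
                    new ContainerControlledLifetimeManager());
            }
        }

    Cross-module relationships then live in the one model, while each module only calls the repository members it cares about.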

    Read the article

  • Cursor seems to freeze in the first attempt of typing - Unity 3D, 12.04

    - by Denis
    It happens on the first attempt at typing, no matter whether it is right after startup, 5 minutes later, or later still. The cursor (or maybe it's the system) seems to freeze, no matter which application I use, taking up to 5 seconds to show what was typed. Subsequently, everything is normal, using other applications too. @Anwar Shah suggested it could be a daemon waiting to run before the launching of the first application. Turning off Zeitgeist didn't help. It occurs only with Unity 3D; tested with Unity 2D, everything is fine. I tried to change some Compiz settings; nothing worked, although I have not tested every single parameter. I also deactivated the ATI proprietary driver, with no effect. My system: AMD E350 1.6 GHz, 2 GB RAM, ATI graphics - Ubuntu 12.04, 64-bit. Update 1: the cursor is blinking normally before I start typing. After the first character (which is not shown), it seems to freeze, taking 5 seconds to get normal again. Very annoying, especially when you want to access login sites. Update 2: I tested on a different and old machine (Athlon 64 4800 x2, 4 GB RAM): no problems - it takes 2 seconds, which is acceptable. I think it could be related to my specific hardware (Samsung RV415), but I'm not sure about it. Anyone experiencing something similar? Is this what I should expect, or can it be fixed or improved? Thanks.

    Read the article

  • Database Migration Scripts: Getting from place A to place B

    - by Phil Factor
    We’ll be looking at a typical database ‘migration’ script which uses an unusual technique to migrate existing ‘de-normalised’ data into a more correct form. So, the book-distribution business that uses the PUBS database has gradually grown organically, and has slipped into ‘de-normalisation’ habits. What’s this? A new column with a list of tags or ‘types’ assigned to books. Because books aren’t really in just one category, someone has ‘cured’ the mismatch between the database and the business requirements. This is fine, but it is now proving difficult for their new website that allows searches by tags. Any request for a history book really has to look in the entire list of associated tags rather than the ‘Type’ field that only keeps the primary tag. We have other problems. The TypeList column has duplicates in there which will be affecting the reporting, and there is the danger of mis-spellings getting in there. The reporting system can’t be persuaded to do reports based on the tags and the database developers are complaining about the unCoddly things going on in their database. In your version of PUBS, this extra column doesn’t exist, so we’ve added it and put in 10,000 titles using SQL Data Generator.

        /* So how do we refactor this database? Firstly, we create a table of all the tags. */
        IF OBJECT_ID('TagName') IS NULL OR OBJECT_ID('TagTitle') IS NULL
          BEGIN
          CREATE TABLE TagName (
            TagName_ID INT IDENTITY(1,1) PRIMARY KEY,
            Tag VARCHAR(20) NOT NULL UNIQUE)

          /* ...and we insert into it all the tags from the list (remembering to take out any leading spaces) */
          INSERT INTO TagName (Tag)
            SELECT DISTINCT LTRIM(x.y.value('.', 'Varchar(80)')) AS [Tag]
            FROM (SELECT Title_ID,
                         CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>')
                         AS XMLkeywords
                  FROM dbo.titles) g
            CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )

          /* we can then use this table to provide a table that relates tags to articles */
          CREATE TABLE TagTitle (
            TagTitle_ID INT IDENTITY(1, 1),
            [title_id] [dbo].[tid] NOT NULL REFERENCES titles (Title_ID),
            TagName_ID INT NOT NULL REFERENCES TagName (Tagname_ID)
            CONSTRAINT [PK_TagTitle]
              PRIMARY KEY CLUSTERED ([title_id] ASC, TagName_ID)
              ON [PRIMARY])

          CREATE NONCLUSTERED INDEX idxTagName_ID ON TagTitle (TagName_ID)
            INCLUDE (TagTitle_ID, title_id)

          /* ...and it is easy to fill this with the tags for each title ... */
          INSERT INTO TagTitle (Title_ID, TagName_ID)
            SELECT DISTINCT Title_ID, TagName_ID
            FROM (SELECT Title_ID,
                         CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>')
                         AS XMLkeywords
                  FROM dbo.titles) g
            CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )
            INNER JOIN TagName ON TagName.Tag = LTRIM(x.y.value('.', 'Varchar(80)'))
          END

        /* That's all there was to it. Now we can select all titles that have the military tag, just to try things out */
        SELECT Title FROM titles
          INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
          INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
          WHERE tagname.tag = 'Military'

        /* and see the top ten most popular tags for titles */
        SELECT Tag, COUNT(*) FROM titles
          INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
          INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
          GROUP BY Tag ORDER BY COUNT(*) DESC

        /* and if you still want your list of tags for each title, then here they are */
        SELECT title_ID, title, STUFF(
          (SELECT ',' + tagname.tag FROM titles thisTitle
             INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
             INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
           WHERE ThisTitle.title_id = titles.title_ID
           FOR XML PATH(''), TYPE).value('.', 'varchar(max)')
          , 1, 1, '')
        FROM titles
        ORDER BY title_ID

    So we’ve refactored our PUBS database without pain. We’ve even put in a check to prevent it being re-run once the new tables are created. Here is the diagram of the new tag relationship: [diagram not reproduced]. We’ve done both the DDL to create the tables and their associated components, and the DML to put the data in them. I could have also included the script to remove the de-normalised TypeList column, but I’d do a whole lot of tests first before doing that. Yes, I’ve left out the assertion tests too, which should check the edge cases and make sure the result is what you’d expect. One thing I can’t quite figure out is how to deal with an ordered list using this simple XML-based technique. We can ensure that, if we have to produce a list of tags, we can get the primary ‘type’ to be first in the list, but what if the entire order is significant? Thank goodness it isn’t in this case. If it were, we might have to revisit a string-splitter function that returns the ordinal position of each component in the sequence.

    You’ll see immediately that we can create a synchronisation script for deployment from a comparison tool such as SQL Compare, to change the schema (DDL). On the other hand, no tool could do the DML to stuff the data into the new table, since there is no way that any tool will be able to work out where the data should go. We used some pretty hairy code to deal with a slightly untypical problem. We would have to do this migration by hand, and it has to go into source control as a batch. If most of your database changes are to be deployed by an automated process, then there must be a way of over-riding this part of the data synchronisation process: taking the part of the script that fills the tables, checking that the tables have not already been filled, and executing it as part of the transaction. Of course, you might prefer the approach I’ve taken with the script of creating the tables in the same batch as the data conversion process, and then using the presence of the tables to prevent the script from being re-run.

    The problem with scripting a refactoring change to a database is that it has to work both ways. If we install the new system and then have to roll back the changes, several books may have been added, or had their tags changed, in the meantime. Yes, you have to script any rollback! These have to be mercilessly tested, and put in source control just in case of the rollback of a deployment after it has been in place for any length of time. I’ve shown you how to do this with this part of the script...

        /* and if you still want your list of tags for each title, then here they are */
        SELECT title_ID, title, STUFF(
          (SELECT ',' + tagname.tag FROM titles thisTitle
             INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
             INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
           WHERE ThisTitle.title_id = titles.title_ID
           FOR XML PATH(''), TYPE).value('.', 'varchar(max)')
          , 1, 1, '')
        FROM titles
        ORDER BY title_ID

    ...which would be turned into an UPDATE ... FROM script:

        UPDATE titles SET typelist = ThisTagList
        FROM (SELECT title_ID, title, STUFF(
                (SELECT ',' + tagname.tag FROM titles thisTitle
                   INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
                   INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
                 WHERE ThisTitle.title_id = titles.title_ID
                 ORDER BY CASE WHEN tagname.tag = titles.[type] THEN 1 ELSE 0 END DESC
                 FOR XML PATH(''), TYPE).value('.', 'varchar(max)')
                , 1, 1, '') AS ThisTagList
              FROM titles) f
        INNER JOIN Titles ON f.title_ID = Titles.title_ID

    You’ll notice that it isn’t quite a round trip because the tags are in a different order, though we’ve managed to make sure that the primary tag is the first one, as originally. So, we’ve improved the database for the poor book distributors using PUBS. It is not a major deal, but you’ve got to be prepared to provide a migration script that will go both forwards and backwards. Ideally, database refactoring scripts should be able to go from any version to any other. Schema synchronization scripts can do this pretty easily, but no data synchronisation scripts can deal with serious refactoring jobs without the developers being able to specify how to deal with cases like this.

    Read the article

  • Oracle Database post (title and body mis-encoded; original was in Japanese)

    - by Yusuke.Yamamoto
    (The original Japanese text of this post did not survive character encoding and is unrecoverable. The fragments that remain indicate a piece about Oracle Database internals (RDBMS behavior, how SQL is handled at compile time) and Oracle's Sustaining Engineering team.)

    Read the article

  • ASP.Net MVC 2 / EF 4 Reference Issue

    - by Eric J.
    My ASP.NET MVC 2 project references a Domain project, where POCO business objects are defined, and a Data project, where EF 4 POCO persistence is implemented. Things were running well until I had a little fussiness with my version control provider (a rollback to a previous version left me with merge conflicts). Now, upon launching the MVC 2 project, I get a runtime error: "The type 'System.Data.Objects.DataClasses.IEntityWithKey' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Data.Entity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'." However, every project references System.Data.Entity (same version). If I remove the reference to System.Data.Entity from the MVC 2 project, I get the same message as a compile-time error. I'm pretty sure something got messed up during the version control issue, but I'm really not sure where to look for this one.

    Read the article

  • My first blog post…

    - by steveh99999
    I’ve been meaning to start a blog for a while now (OK, for several years…) - finally now, here it begins. First post, something really simple, but a wise man once told me about the best way to improve SQL Server performance. Store Less Data. That's it... that's all there is to it... Over the years, I've seen the following:

    - a 200 GB database which held 3 days' data. Once business requirements changed, we were able to hold only 1 day's data in this database.
    - a table developed by DBAs to hold application table cardinality information - that information was collected at 2-hour intervals every day for 7 years! After 7 years the DBA space-info table had become the largest table in the database - 60 million rows! It was a simple change to remove a lot of the historical intra-day data and change the schedule to run only once per evening. Suddenly that table held 6 million rows instead of 60 million.
    - lots of backup and restore history held in msdb. See this post by Brent Ozar for more details on this issue.

    Imagine how much faster the backups, DBCC checks and reindexes ran when the above 3 changes were implemented? How often do you review your big databases and tables to see if you're actually holding only data that is really required by the business?

    Read the article

  • Returning EF entities using WCF - Read only web service / public API

    - by alex
    I'm currently migrating an application from LINQ to SQL and ASP.NET Web Services (asmx) to Entity Framework and WCF. My question is: I have a bunch of POCO classes for which I have XML mapping files (for the LINQ to SQL). I've replaced my LINQ to SQL model with an Entity Framework data model, and I have an interface - something like IService - that has all the methods my service needs to implement, for example:

        Product[] GetProductsByKeyword(string keyword);

    In the above case, Product is a POCO. I now have the entities within my EF data model - I'm using .NET 4, and could take advantage of POCO support, but don't really see the need - this service is strictly read-only. What's the best way of returning entities in my WCF service? I want it to support other client platforms, not just .NET (so the PHP guys could use it).
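    For cross-platform clients, one hedged option is to keep the wire format free of EF types altogether: mark a plain DTO with data-contract attributes and project entities into it inside the service. A sketch (IService and Product come from the question; the DTO shape and query are invented for illustration):

        using System.Linq;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class ProductDto
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Name { get; set; }
        }

        [ServiceContract]
        public interface IService
        {
            [OperationContract]
            ProductDto[] GetProductsByKeyword(string keyword);
        }

        // Inside the service implementation: project EF entities to DTOs so no
        // EF-specific types (or lazy-loading proxies) ever cross the wire.
        // return context.Products
        //               .Where(p => p.Name.Contains(keyword))
        //               .Select(p => new ProductDto { Id = p.ProductID, Name = p.Name })
        //               .ToArray();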

    Read the article

  • Best way to attach row from datagrid to EF.

    - by AKoran
    Using MVVM and EF... I have a DataGrid binding to a view model (using an ObservableCollection). The view model has a Save command which simply calls the SaveChanges method of the data context. However, when a user adds a new row to the DataGrid, the new entity is detached. Is there an easy way to automatically attach it when it gets created? Currently I'm having to do this in the Save command of my view model, and it seems a bit clunky:

        foreach (var dataItem in _DataList) // where _DataList is the ObservableCollection
        {
            if (dataItem.EntityState == EntityState.Detached)
            {
                _DataContext.AddToTestTables(dataItem);
            }
        }
        _DataContext.SaveChanges();
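    A hedged alternative sketch: because the ObservableCollection raises CollectionChanged when the grid commits a new row, the view model can attach the entity the moment it appears, leaving the Save command as a single SaveChanges call (the TestTable type is inferred from AddToTestTables in the question):

        using System.Collections.Specialized;

        // In the view model constructor:
        _DataList.CollectionChanged += (s, e) =>
        {
            if (e.Action == NotifyCollectionChangedAction.Add)
            {
                foreach (TestTable item in e.NewItems)
                {
                    // Add the new row to the context as soon as the grid creates it.
                    if (item.EntityState == EntityState.Detached)
                        _DataContext.AddToTestTables(item);
                }
            }
        };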

    Read the article

  • Building My First JavaFX Application Using Netbeans 7.1

    - by javafx4you
    Angela Caicedo, Oracle Technology Evangelist for JavaFX, has released a series of videos demonstrating how to build a simple JavaFX application using NetBeans 7.1. This video series is a great introduction to using JavaFX and takes the viewer through easy-to-follow, step-by-step instructions, including example code and showing the results along the way. These videos are highly recommended to anyone who is new to JavaFX and is looking for a quick getting-started guide. The first video provides an introduction to the application and shows viewers the end result of the exercises that will be demonstrated throughout the series. The second video (Part 1) demonstrates how to get started creating the application from an empty project, building your first JavaFX application in NetBeans 7.1. Instructions include defining the components, creating and inserting image packages, adding the resources you want to make available in your application, setting the clipping area for your image, and setting the style for your image to create a transparent window. The third video (Part 2) demonstrates how to handle events and binding in the sample application using JavaFX. Watch the first 3 episodes now and stay tuned for future installments in this series on the Java YouTube channel.

    Read the article

  • Doing your first mock with JustMock

    - by mehfuzh
    In this post, I will start with a more traditional mocking example: a fund transfer scenario between two different currency accounts, using JustMock. Our target interface that we will be mocking looks like:

        public interface ICurrencyService
        {
            float GetConversionRate(string fromCurrency, string toCurrency);
        }

    Moving forward, the SUT - the class that consumes the service and will be invoked by the user (provided that the ICurrencyService is passed in DI style) - looks like:

        public class AccountService : IAccountService
        {
            private readonly ICurrencyService currencyService;

            public AccountService(ICurrencyService currencyService)
            {
                this.currencyService = currencyService;
            }

            #region IAccountService Members

            public void TransferFunds(Account from, Account to, float amount)
            {
                from.Withdraw(amount);
                float conversionRate = currencyService.GetConversionRate(from.Currency, to.Currency);
                float convertedAmount = amount * conversionRate;
                to.Deposit(convertedAmount);
            }

            #endregion
        }

    As we can see, there is a TransferFunds action implemented from IAccountService that takes a source account from which it withdraws some money, and a target account to which the transfer takes place using the provided conversion rate. Our first step is to create the mock. The syntax for creating instance mocks is pretty much the same, and is valid for all interfaces and non-sealed/sealed concrete instance classes. You can pass in additional options, like whether it is a strict mock or not; by default all mocks in JustMock are loose, and you can use them as default-valued objects or stubs as well.

        ICurrencyService currencyService = Mock.Create<ICurrencyService>();

    Using JustMock, setting up your expectations and asserting them always goes through Mock.Arrange|Assert, and this is pretty much the same syntax no matter what type of mocking you are doing. Therefore, in the above scenario, we want to make sure that the conversion rate always returns 2.20f when converting from GBP to CAD. To do so, we need to arrange it in the following way:

        Mock.Arrange(() => currencyService.GetConversionRate("GBP", "CAD")).Returns(2.20f).MustBeCalled();

    Here, I have additionally marked the mock call as a must. That means it should be invoked anywhere in the code before we do Mock.Assert; we can also assert mocks directly through lambda expressions, but the more general Mock.Assert(mocked) will assert only the setups that are marked as MustBeCalled(). Now, coming back to the main topic: as we have set up the mock, it is time to act on it. Therefore, first we create our account service class and create our from and to accounts respectively:

        var accountService = new AccountService(currencyService);

        var canadianAccount = new Account(0, "CAD");
        var britishAccount = new Account(0, "GBP");

    Next, we add some money to the GBP account:

        britishAccount.Deposit(100);

    Finally, we do our transfer with the following:

        accountService.TransferFunds(britishAccount, canadianAccount, 100);

    Once everything is completed, we need to make sure that things went as we expected, so it is time for assertions. Here, we first do the general assertions:

        Assert.Equal(0, britishAccount.Balance);
        Assert.Equal(220, canadianAccount.Balance);

    Following that, we do our mock assertion; as we have marked the call as MustBeCalled, it will make sure that our mock is actually invoked. Moreover, we can add filters like how many times our expected mock call has occurred, which will be covered in coming posts:

        Mock.Assert(currencyService);

    So far, that actually concludes our first mock with JustMock; do stay tuned for more. Enjoy!!

    Read the article

  • First and Follow Sets for a Grammar

    - by Aimee Jones
    I'm studying for a Compiler Construction module I'm doing and I have a sample question as follows: calculate the FIRST and FOLLOW sets for the following grammar.

        S -> uBDz
        B -> Bv
        B -> w
        D -> EF
        E -> y
        E -> e
        F -> x
        F -> e

    I have tried to figure it out so far, but I'm a bit unsure if I'm correct. Could someone verify if I'm doing it right, and if not, what am I missing? My answer is below:

              FIRST     | FOLLOW
        S  |  {u}       | {$}
        B  |  {w}       | {y,x,v,z}
        D  |  {y,e,x}   | {z}
        E  |  {y,e}     | {x,z}
        F  |  {x,e}     | {z}
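    Not part of the question, but a small fixed-point program makes answers like this easy to check mechanically. A minimal C# sketch for the FIRST sets (FOLLOW is an analogous second fixed point); it treats the grammar's 'e' as an empty (epsilon) production, which is what the sets above imply:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class FirstSets
        {
            // Add items to target, reporting whether anything new was added.
            static bool AddAll(HashSet<char> target, IEnumerable<char> items)
            {
                bool changed = false;
                foreach (char c in items) changed |= target.Add(c);
                return changed;
            }

            static void Main()
            {
                // Grammar from the question; uppercase = non-terminal, "" = epsilon body.
                var productions = new Dictionary<char, string[]>
                {
                    ['S'] = new[] { "uBDz" },
                    ['B'] = new[] { "Bv", "w" },
                    ['D'] = new[] { "EF" },
                    ['E'] = new[] { "y", "" },
                    ['F'] = new[] { "x", "" },
                };

                var first = productions.Keys.ToDictionary(nt => nt, _ => new HashSet<char>());
                var nullable = new HashSet<char>();

                // Classic fixed point: keep sweeping the productions until nothing changes.
                bool changed = true;
                while (changed)
                {
                    changed = false;
                    foreach (var production in productions)
                        foreach (string body in production.Value)
                        {
                            bool bodyNullable = true;
                            foreach (char sym in body)
                            {
                                if (char.IsUpper(sym))
                                {
                                    changed |= AddAll(first[production.Key], first[sym]);
                                    bodyNullable = nullable.Contains(sym);
                                }
                                else
                                {
                                    changed |= first[production.Key].Add(sym);
                                    bodyNullable = false;
                                }
                                if (!bodyNullable) break;   // later symbols can't contribute
                            }
                            if (bodyNullable) changed |= nullable.Add(production.Key);
                        }
                }

                foreach (var set in first)
                    Console.WriteLine("FIRST(" + set.Key + ") = {" + string.Join(",", set.Value) + "}"
                                      + (nullable.Contains(set.Key) ? " plus epsilon" : ""));
            }
        }

    Run against this grammar it prints FIRST(S)={u}, FIRST(B)={w}, FIRST(D)={y,x} plus epsilon, FIRST(E)={y} plus epsilon and FIRST(F)={x} plus epsilon, which matches the table above with 'e' standing for epsilon.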

    Read the article
