Search Results

Search found 3186 results on 128 pages for 'snowflake schema'.

  • How to: Avoid Inserting Related Entities?

    - by niaher
    I have this schema: I want to insert an Order with OrderItems into the database, so I wrote this method:

        public void SaveOrder(Order order)
        {
            using (var repository = new StoreEntities())
            {
                // Add order.
                repository.Orders.AddObject(order);

                // Add order items.
                foreach (OrderItem orderItem in order.OrderItems)
                {
                    repository.OrderItems.AddObject(orderItem);
                }

                repository.SaveChanges();
            }
        }

    Everything is inserted just fine, except that new Product records are inserted too, which is not what I want. What I want is to insert the Order and its OrderItems without going any further down the object graph. How can that be achieved? Any help is really appreciated, thank you.

  • significance of index name in creating an index (MySQL)

    - by Will
    I've done something like this in order to use ON DUPLICATE KEY UPDATE:

        CREATE UNIQUE INDEX blah ON mytable (my_col_to_make_an_index);

    and it's worked just fine. I'm just not sure what the purpose of the index name is -- in this case 'blah'. The stuff I've read says to use one, but I can't fathom why. It doesn't seem to be used in queries, although I can see it if I export the schema. So ... what purpose does the index name serve? If it helps, the line in the CREATE TABLE output ends up looking like:

        UNIQUE KEY `clothID` (`clothID`)
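
    A short illustration, reusing the names from the question: the index name is mainly a handle for referring to the index later, e.g. when dropping it or hinting the optimizer at it.

        -- the index name is how you manage the index afterwards
        DROP INDEX blah ON mytable;

        -- or how you point the optimizer at it explicitly
        SELECT my_col_to_make_an_index
        FROM mytable FORCE INDEX (blah)
        WHERE my_col_to_make_an_index = 'x';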

  • What is the best way to restore (roll back) data in an application to a specified state (date)?

    - by panzerschreck
    Hello, an example would set the context right. The example below captures the various states of the entity which needs to be reverted (rolled back):

        State 1 - Recorded on 01-Mar-2010
        Column1    Column2
        Data1      0.56

        State 2 - Recorded on 02-Mar-2010
        Column1    Column2
        Data1      0.57

        State 3 - Recorded on 03-Mar-2010
        Column1    Column2
        Data1      0.58

    The user notices that state 3 is not what he intended to be in, and decides to revert to state 2. One approach that I can think of, without modifying the entity, is "auditing" all the inserts/updates as below. The rollback information captures the data just before the updates/modifications on the entity, so that it can be applied in order when you need to revert. Please note that changing the entity's schema is not an option.

        Rollback R1 - Recorded on 01-Mar-2010
        Column1    Column2
        Data1      0.56

        Rollback R2 - Recorded on 02-Mar-2010
        Column1    Column2
        Data1      0.56

        Rollback R3 - Recorded on 03-Mar-2010
        Column1    Column2
        Data1      0.57

    So, to get to state 2, we would start with rollback information R1 and apply R2 onto it. Is there a better approach to achieve this? Thanks for your time.
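
    A minimal sketch of the audit-table idea, assuming a separate history table (all names below are hypothetical, and the syntax is MySQL-flavored): keep a full row image per change, then restore by copying the row recorded on the target date back onto the entity.

        -- hypothetical names throughout; the entity table itself is untouched
        CREATE TABLE entity_history (
            entity_id   INT          NOT NULL,
            column1     VARCHAR(50)  NOT NULL,
            column2     DECIMAL(8,2) NOT NULL,
            recorded_on DATE         NOT NULL,
            PRIMARY KEY (entity_id, recorded_on)
        );

        -- revert the entity to its state as of 02-Mar-2010
        UPDATE entity e
        JOIN entity_history h
          ON h.entity_id = e.id
         AND h.recorded_on = '2010-03-02'
        SET e.column1 = h.column1,
            e.column2 = h.column2;

    Restoring a full row image this way avoids replaying a chain of deltas (R1 then R2) to reach a state.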

  • References/walkthroughs for maintaining database schemas with Visual Studio 2010?

    - by user206356
    I have Visual Studio 2010 Beta 2 and SQL Server 2008 installed. I'm working with a populated database and want to modify various column types. SQL Server Management Studio requires me to drop tables to do this, and gets pretty finicky given my moderate level of knowledge of SQL Server. However, I heard the new database project type supports changing the database schema to the desired format, and that it will handle creating and running all the scripts to implement the changes. I've created a VS2010 database project using the existing database as the source, but so far haven't had much luck figuring out the appropriate method to make the changes without getting an error. As a result, I'm looking for any reference info I can find on using VS2010's capabilities in this area. Any suggestions?

  • How can I create a sample SQLite DB for my iPhone app?

    - by Dr Dork
    I'm diving in to iPhone development and I'm building an iPhone app that uses the Core Data framework, and my first task will be to get the model set up along with a few views that will display it. Thus far, I have the model defined and my Managed Object files created, but I don't have a database with any sample data. What's a quick way to create a DB that conforms to my schema? Are there any tools that can generate a sample DB using my schemas? Is there a good tool I can use to directly manipulate the data in the DB for testing purposes? Thanks in advance for your help! I'm going to continue researching this question right now.

  • Can XSD elements have more than one <annotation>?

    - by Scott
    I have a common data schema in XSD that is used by two different applications, A and B; each uses the data differently. I want to document the different business rules per application. Can I do this?

        <xs:complexType name="Account">
            <xs:annotation app="A">
                <xs:documentation>
                    The Account entity must be used this way for app A
                </xs:documentation>
            </xs:annotation>
            <xs:annotation app="B">
                <xs:documentation>
                    The Account entity must be used this way for app B
                </xs:documentation>
            </xs:annotation>
            <xs:complexContent>
            ...

  • Easy way to compute how close an auto_increment is to its maximum value?

    - by David M
    So yesterday we had a table whose auto_increment PK, a smallint, reached its maximum. We had to alter the table on an emergency basis, which is definitely not how we like to roll. Is there an easy way to report on how close each auto_increment field that we use is to its maximum? The best way I can think of is to do a SHOW CREATE TABLE statement, parse out the size of the auto-incremented column, then compare that to the AUTO_INCREMENT value for the table. On the other hand, given that the schema doesn't change very often, should I store information about the columns' maximum values and get the current AUTO_INCREMENT with SHOW TABLE STATUS?
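
    A hedged alternative to parsing SHOW CREATE TABLE: information_schema already exposes both the column type and the table's next AUTO_INCREMENT value, so one query can report usage across the whole server. The sketch below assumes unsigned integer columns; roughly halve the maximums for signed ones.

        -- fraction of the key space already consumed, worst offenders first
        SELECT t.table_name,
               c.column_name,
               c.column_type,
               t.auto_increment,
               t.auto_increment /
                 CASE
                   WHEN c.column_type LIKE 'tinyint%'   THEN 255
                   WHEN c.column_type LIKE 'smallint%'  THEN 65535
                   WHEN c.column_type LIKE 'mediumint%' THEN 16777215
                   WHEN c.column_type LIKE 'int%'       THEN 4294967295
                   ELSE 18446744073709551615
                 END AS fraction_used
        FROM information_schema.tables t
        JOIN information_schema.columns c
          ON c.table_schema = t.table_schema
         AND c.table_name   = t.table_name
        WHERE t.auto_increment IS NOT NULL
          AND c.extra LIKE '%auto_increment%'
        ORDER BY fraction_used DESC;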

  • Programming to interfaces while mapping with Fluent NHibernate.

    - by Lucious
    Fluent mapping: I have the following scenario.

        public class CustomerMap : ClassMap<ICustomer>
        {
            public CustomerMap()
            {
                Table("Customer");
                Id(c => c.Id);
                Map(c => c.Name);
                HasMany(c => c.Orders);
            }
        }

        public class OrderMap : ClassMap<IOrder>
        {
            public OrderMap()
            {
                Table("Orders");
                References(o => o.Customer).Access.;
                Id(o => o.Id);
                Map(o => o.DateCreated);
            }
        }

    Problems:
    - When the schema is exported, the Orders table has two columns: ICustomer_Id and Customer_Id.
    - A "refers to an unmapped class" exception for Order.

    Can you please help me out?

  • Embedded analog of CouchDB, same as SQLite for SQL Server

    - by Mike Chaliy
    I like the idea of document-oriented databases like CouchDB. I am looking for a simple analog. My requirements are just:

    - persistent storage for schema-less data
    - some simple in-proc querying
    - transactions and versioning (good to have)
    - Ruby API
    - map/reduce (also good to have)
    - should work on shared hosting

    What I do not need is REST/HTTP interfaces (I will use it in-proc). Also I do not need all the scalability stuff.

  • Hibernate won't autogenerate sequence table

    - by Jason
    I'm trying to use a sequence table to generate keys for my entities. I notice that if I just use @GeneratedValue(strategy=GenerationType.TABLE) with no explicit generator, Hibernate will automatically create the hibernate_sequences table in my DB if it doesn't exist. This is great. However, I wanted to make some changes to the sequence table, so I created a @TableGenerator like the following:

        @GeneratedValue(strategy=GenerationType.TABLE, generator="vdat_seq")
        @TableGenerator(name="vdat_seq",
                        table="VDAT_SEQ",
                        pkColumnName="seq_name",
                        valueColumnName="seq_next_val",
                        allocationSize=1)

    This works fine if I manually create the VDAT_SEQ table in my schema; Hibernate won't auto-create it anymore. This causes an issue in my unit tests, since I'd rather not have to manually create a table and maintain it on our testing DB. Is there a configuration variable or some other way to get Hibernate to generate the sequence table?
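
    As a hedged stopgap for the tests, the DDL implied by that @TableGenerator is small enough to ship in the test setup script (the column types below are assumptions; only the table and column names come from the annotation):

        -- matches the generator settings from the question; types are assumed
        CREATE TABLE VDAT_SEQ (
            seq_name     VARCHAR(255) NOT NULL PRIMARY KEY,
            seq_next_val BIGINT
        );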

  • Authenticating model - best practices

    - by zerkms
    I come to ASP.NET from PHP, which is why I ask my question: the two are totally different in how an application works and handles requests. Well, I have an existing table with user credentials, such as: id, login, password (SHA-hashed), email, phone, room. I have built a custom membership provider so it can handle my own database authentication schema. And now I'm confused, because User.Identity.Name contains only the user's login, not the complete object (I'm using LINQ to SQL to communicate with the database, and I need its User object to work with). In PHP applications I would just store the user object in some static member of an Auth class (or some other class), but here in ASP.NET MVC I cannot do this, because a static member is shared across all requests and permanent; it does not live within only the current request (as it did in PHP). So my question is: how and where should I retrieve and store the LINQ to SQL user object so I can work with it within the current, and only the current, request? (After the request is processed successfully I expect it to be disposed from memory and created again on the next request.) Or am I following a totally wrong way?

  • How to coerce type of ActiveRecord attribute returned by :select phrase on joined table?

    - by tribalvibes
    Having trouble with AR 2.3.5, e.g.:

        users = User.all(
          :select     => "u.id, c.user_id",
          :from       => "users u, connections c",
          :conditions => ...
        )

    Returns, e.g.:

        => [#<User id: 1000>]
        >> users.first.attributes
        => {"id"=>1000, "user_id"=>"1000"}

    Note that AR returns the id of the model searched as numeric, but the selected user_id of the joined model as a String, although both are int(11) in the database schema. How could I better form this type of query to select columns of tables backing multiple models and retrieve their natural type rather than String? It seems like AR is punting on this somewhere. How could I coerce the returned types at AR load time and not have to tack .to_i (etc.) onto every post-hoc access?
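
    One possible workaround at the SQL level rather than in ActiveRecord, with the caveat that whether the value actually comes back numeric depends on the database adapter in use: cast the joined column in the SELECT list (MySQL syntax; names from the question).

        -- the :select string above would carry the cast
        SELECT u.id, CAST(c.user_id AS UNSIGNED) AS user_id
        FROM users u, connections c
        WHERE ...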

  • Process for Upgrading from RedBean 3.5 to RedBean 4

    - by Jay Haase
    I am currently using RedBean version 3.5. I think I would like to move to the latest version of RedBean, version 4. I have found no documentation about the upgrade process and have a number of significant questions: Is my RedBean 3.5 database schema compatible with 4, or will I have to migrate all of the tables to some new format? Is any of my RedBean 3.5 code compatible with version 4, or would I need to rewrite all of my code that uses RedBean 3.5? Would it make more sense to upgrade to Doctrine? As a side note, I am also concerned about RedBean's dropping of support for Composer, which I have found to be über helpful in managing the various versions of the libraries I am using.

  • Solr alphabetical sorting trouble. Sorting uppercase then lowercase for string type field

    - by Alauddin Ansari
    I've created a title field with the list below:

        Asking is good
        But answering is best
        join the group like this
        You are the best
        hey dudes. whass up

    When I'm sorting this ascending (&sort=title ASC), I get:

        Asking is good
        But answering is best
        You are the best
        hey dudes. whass up
        join the group like this

    and descending (&sort=title DESC):

        join the group like this
        hey dudes. whass up
        You are the best
        But answering is best
        Asking is good

    But I'm expecting a result like this (&sort=title ASC):

        Asking is good
        But answering is best
        hey dudes. whass up
        join the group like this
        You are the best

    schema.xml:

        <field name="title" type="text_general" indexed="true" stored="true"/>
        <field name="title_sort" type="string" indexed="true" stored="false"/>
        <copyField source="title" dest="title_sort" />

    I'm using the title_sort field to sort (I also tried the title field). Please tell me where I'm going wrong.

  • Why do I have to set the max length of every damn text column in the database?

    - by John Leidegren
    Why is it that every RDBMS insists that you tell it what the max length of a text field is going to be... why can't it just infer this information from the data that's put into the database? I've mostly worked with MS SQL Server, but every other database I know also demands that you set these arbitrary limits on your data schema. The reality is that this is not particularly helpful or friendly to work with, because the business requirements change all the time and almost every day some end-user is trying to put a lot of text into that column. Does anyone with some inner-workings knowledge of an RDBMS know why we just don't infer the limits from the data that's put into the storage? I'm not talking about guessing the type information, but guessing the limits of a particular text column. I mean, there's a reason why I don't use nvarchar(max) on every text column in the database.

  • Need help in understanding a SELECT query

    - by Grant Smith
    I have the following query. It uses only one table (Customers) from the Northwind database. I have no idea how it works or what its intention is. I hope there are a lot of DBAs here, so I ask for an explanation -- in particular, I don't know what OVER and PARTITION do here.

        WITH NumberedWomen AS
        (
            SELECT CustomerId,
                   ROW_NUMBER() OVER
                   (
                       PARTITION BY c.Country
                       ORDER BY LEN(c.CompanyName) ASC
                   ) women
            FROM Customers c
        )
        SELECT *
        FROM NumberedWomen
        WHERE women > 3

    If you need the db schema, it is here.
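
    A hedged reading, worth verifying against real data: PARTITION BY restarts ROW_NUMBER() at 1 for each country, and the ORDER BY numbers customers within a country from shortest to longest company name, so the outer WHERE women > 3 keeps everything after the first three per country. Something like the following makes the numbering visible:

        -- rn restarts at 1 whenever Country changes
        SELECT c.Country,
               c.CompanyName,
               ROW_NUMBER() OVER (PARTITION BY c.Country
                                  ORDER BY LEN(c.CompanyName) ASC) AS rn
        FROM Customers c
        ORDER BY c.Country, rn;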

  • MySQL Insert Query Randomly Takes a Long Time

    - by ShimmerTroll
    I am using MySQL to manage session data for my PHP application. When testing the app, it is usually very quick and responsive. However, seemingly at random, the response will stall before finally completing after a few seconds. I have narrowed the problem down to the session write query, which looks something like this:

        INSERT INTO Session
        VALUES ('lvg0p9peb1vd55tue9nvh460a7', '1275704013', '')
        ON DUPLICATE KEY UPDATE sessAccess = '1275704013', sessData = '';

    The slow query log has this information:

        Query_time: 0.524446  Lock_time: 0.000046  Rows_sent: 0  Rows_examined: 0

    This happens about 1 out of every 10 times. The query usually takes only ~0.0044 sec. The table is InnoDB with about 60 rows. sessId is the primary key with a BTREE index. Since this is accessed on every page view, this is clearly not an acceptable execution time. Why is this happening?

    Update: The table schema is: sessId varchar(32), sessAccess int(10), sessData text.
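
    A hedged diagnostic angle, not a definitive answer: with InnoDB, every COMMIT flushes the transaction log to disk by default, so intermittent fsync stalls on the host can look exactly like this. One documented knob to test with (at the cost of up to roughly a second of durability on crash) is:

        -- for testing only: flush the log once per second instead of per commit
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;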

  • Improve SQL query performance

    - by Anax
    I have three tables where I store actual person data (person), teams (team), and entries (athlete). The schema of the three tables is: In each team there might be two or more athletes. I'm trying to create a query to produce the most frequent pairs, meaning people who play in teams of two. I came up with the following query:

        SELECT p1.surname, p1.name, p2.surname, p2.name, COUNT(*) AS freq
        FROM person p1, athlete a1, person p2, athlete a2
        WHERE p1.id = a1.person_id
          AND p2.id = a2.person_id
          AND a1.team_id = a2.team_id
          AND a1.team_id IN
          (
              SELECT id
              FROM team, athlete
              WHERE team.id = athlete.team_id
              GROUP BY team.id
              HAVING COUNT(*) = 2
          )
        GROUP BY p1.id
        ORDER BY freq DESC

    Obviously this is a resource-consuming query. Is there a way to improve it?
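
    One hedged rewrite to consider, using the same tables (results worth verifying against real data): pair athletes within a team directly, ordering each pair by person_id so it is counted only once, and join against the two-person teams instead of the IN (...) subquery.

        SELECT p1.surname, p1.name, p2.surname, p2.name, COUNT(*) AS freq
        FROM athlete a1
        JOIN athlete a2
          ON a2.team_id = a1.team_id
         AND a2.person_id > a1.person_id      -- count each pair once
        JOIN (SELECT team_id
              FROM athlete
              GROUP BY team_id
              HAVING COUNT(*) = 2) duos       -- only two-person teams
          ON duos.team_id = a1.team_id
        JOIN person p1 ON p1.id = a1.person_id
        JOIN person p2 ON p2.id = a2.person_id
        GROUP BY p1.id, p2.id
        ORDER BY freq DESC

    Grouping by both p1.id and p2.id also keeps distinct partners apart, which the original GROUP BY p1.id alone may not.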

  • validate XML with XSD in C++

    - by katl
    Hi! I'm pretty new to both XML validation and C++ (more familiar with Java..), so I guess this is a trivial question. If I have an XML file and an XSD schema, what is the best way to validate one against the other? I'd like to do this in C++ without using external libraries, or with as few libraries as possible -- is that possible? Any other ideas for how this can be done? I want a simple solution; the only output needed is whether the validation is successful or not. Any help is appreciated. // Katarina

  • Help on choosing which SQL Server 2008 scale-out solution to pick (replication, ...)

    - by usr
    I am currently crossing the jungle of SQL Server scale-out technologies like replication, log shipping, mirroring... I have the following constraints on my choice:

    - I want the read-only load to be spread across the primary and the secondary (mirror, subscriber) server
    - Write load can be sent directly to the primary server
    - The solution should be nearly maintenance free. Schema changes should just replicate to the secondary server (attention: replication seems to have some serious constraints here)
    - Written data should be accessible very quickly (in under 1 s, but instantaneously would be better) on the secondary server
    - On server failure I can easily tolerate up to one hour of data loss. I am more concerned with easy scalability

    Here are some options for what I could pick: http://msdn.microsoft.com/en-us/library/bb510414.aspx. Any experience you could share?

  • Generate service layer with Hibernate

    - by gmate
    Hi all! I generate *.hbm.xml mapping files and *.java files from the DB schema with Hibernate Tools. My question is: is there any option to generate service classes as well? These are the classes where I implement the store(), find(), delete(), etc. methods. I know that for C# there are many solutions to generate almost everything. I'm looking for the same, but with Hibernate. Is there any? Thanks for every reply in advance!

  • Designing a database with a flexible user profile

    - by Mughrabi
    Hi, I am working on a design where I can have flexible attributes for a user, and I am confused about how to continue the design of the schema. I made a table where I keep system-needed information, called users: id, username, password. Now I wish to create a profile table and have a one-to-one relation, where all the other attributes, such as email, first name, last name, etc., live in the profile table. My question is: is there a way to add a third table in which even the profile will be flexible, so that if my clients need to create a new attribute, it won't need any customization to the code? Regards,
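
    A hedged sketch of the usual answer here, the attribute/value (EAV) pattern: new profile attributes become rows instead of columns, so adding one needs no schema or code change. All names below are hypothetical; the syntax is MySQL-flavored.

        -- the catalogue of attributes clients may define
        CREATE TABLE profile_attribute (
            id   INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(64) NOT NULL UNIQUE      -- e.g. 'email', 'first_name'
        );

        -- one row per (user, attribute) pair
        CREATE TABLE profile_value (
            user_id      INT NOT NULL,
            attribute_id INT NOT NULL,
            value        TEXT,
            PRIMARY KEY (user_id, attribute_id),
            FOREIGN KEY (user_id)      REFERENCES users (id),
            FOREIGN KEY (attribute_id) REFERENCES profile_attribute (id)
        );

    The trade-off is that querying and validating typed values gets harder, which is why fixed columns are usually kept for the attributes every user has.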

  • exactly what does rake db:migrate do?

    - by happythenewsad
    Does rake db:migrate only add new migrations, or does it drop all migrations/changes and build everything anew? I think rake is throwing an error because it is trying to access a table attribute in migration 040 that was deleted in migration 042. Somehow my DB and rake are out of sync, and I want to fix them. For you experts out there: is it common for rake to get out of sync with migrations? How can I avoid this (no, I do not hand-edit my schema or rake files)?

  • What is the best way to partition large tables in SQL Server?

    - by RyanFetz
    In a recent project the "lead" developer designed a database schema where "larger" tables would be split across two seperate databases with a view on the main database which unioned the two seperate database-tables together. The main database is what the application was driven off of so these tables looked and felt like ordinary tables (except some quirkly things around updating). This seemed like a HUGE performance problem. We do see problems with performance around these tables but nothing to make him change his mind about his design. Just wondering what is the best way to do this, or if it is even worth doing?
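
    For reference, a hedged sketch of what native partitioning looks like in SQL Server 2005 and later: one logical table spread over multiple filegroups, instead of a cross-database UNION view. Boundary values, filegroup and table names below are all made up.

        -- rows are routed to a partition by date range
        CREATE PARTITION FUNCTION pf_by_year (datetime)
            AS RANGE RIGHT FOR VALUES ('2009-01-01', '2010-01-01');

        -- each partition lands on its own (pre-created) filegroup
        CREATE PARTITION SCHEME ps_by_year
            AS PARTITION pf_by_year TO (fg_old, fg_2009, fg_2010);

        -- the partitioning column must be part of the unique key
        CREATE TABLE dbo.BigTable (
            id        int      NOT NULL,
            createdOn datetime NOT NULL,
            CONSTRAINT pk_BigTable PRIMARY KEY (id, createdOn)
        ) ON ps_by_year (createdOn);

    Unlike the two-database view, the table stays updatable in the ordinary way and the optimizer can eliminate partitions it doesn't need.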

  • Optimal search queries

    - by Macros
    Following on from my last question (http://stackoverflow.com/questions/2788082/sql-server-query-performance), and having discovered that my method of allowing optional parameters in a search query is suboptimal, does anyone have guidelines on how to approach this? For example, say I have an application table, a customer table, and a contact details table, and I want to create an SP which allows searching on some, none, or all of surname, home phone, mobile, and app ID. I may use something like the following:

        select *
        from application a
        inner join customer c on a.customerid = c.id
        left join contact hp  on (c.id = hp.customerid  and hp.contacttype  = 'homephone')
        left join contact mob on (c.id = mob.customerid and mob.contacttype = 'mobile')
        where (a.ID = @ID or @ID is null)
          and (c.Surname = @Surname or @Surname is null)
          and (HP.phonenumber = @Homephone or @Homephone is null)
          and (MOB.phonenumber = @Mobile or @Mobile is null)

    The schema used above isn't real, and I wouldn't use select * in a real-world scenario; it is the construction of the where clause I am interested in. Is there a better approach, either dynamic SQL or an alternative which can achieve the same result without the need for many nested conditionals? Some SPs may have 10-15 criteria used in this way.
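
    A hedged sketch of the dynamic-SQL route, reusing the hypothetical names above: build only the predicates that were actually supplied, and keep the statement parameterized through sp_executesql so plans can still be cached. (Only two of the four criteria are shown; the @Surname type is an assumption.)

        DECLARE @sql nvarchar(max) = N'
            SELECT a.ID, c.Surname
            FROM application a
            INNER JOIN customer c ON a.customerid = c.id
            WHERE 1 = 1';

        -- append only the filters the caller supplied
        IF @ID IS NOT NULL      SET @sql += N' AND a.ID = @ID';
        IF @Surname IS NOT NULL SET @sql += N' AND c.Surname = @Surname';

        -- sp_executesql tolerates declared parameters the text doesn't use
        EXEC sp_executesql @sql,
             N'@ID int, @Surname nvarchar(100)',
             @ID = @ID, @Surname = @Surname;

    An alternative worth testing on SQL Server 2008 is keeping the catch-all (@X or @X is null) form and appending OPTION (RECOMPILE), which trades plan reuse for a plan compiled against the actual parameter values.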
