Search Results

Search found 10719 results on 429 pages for 'temp tables'.

Page 298 of 429.

  • Entity Framework Performance Problem

    - by Steve Horn
    I'm hoping that someone can help me understand how to overcome a performance problem I'm running into with the latest version of the Entity Framework. In my test, I created my model from a database consisting of around 80 tables. The problem I'm running into is that the very first query I run on a thread is very expensive. If I run without pre-compiling views, the first query takes anywhere from 5800 to 6600 milliseconds. If I pre-compile the views (see this article) I can get the initial query cost down to about 2800 to 3200 milliseconds. 3 seconds for each request is still unacceptable for my needs. Subsequent queries are very fast. Can you please help me understand how to eliminate the poor performance of the initial query? I'm using the version of Entity Framework that ships with Visual Studio 2010 RC.

    Read the article

  • mySQL and general database normalization question

    - by Sinan
    I have a question about normalization. Suppose I have an application dealing with songs. At first I thought of structuring it like this:
    Songs table: id | song_title | album_id | publisher_id | artist_id
    Albums table: id | album_title | etc...
    Publishers table: id | publisher_name | etc...
    Artists table: id | artist_name | etc...
    Then, thinking about normalization, I wondered whether I should get rid of album_id, publisher_id and artist_id in the songs table and put them in intermediate tables like this:
    Table song_album: song_id, album_id
    Table song_publisher: song_id, publisher_id
    Table song_artist: song_id, artist_id
    Now I can't decide which is the better way. I'm not an expert on database design, so if someone would point out the right direction, it would be awesome. Are there any performance issues between the two approaches? Thanks
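
    A minimal sketch of the two layouts being compared, assuming MySQL and hypothetical column types. The intermediate (junction) tables are only needed where a relationship is genuinely many-to-many, e.g. a song credited to several artists; for strictly one-to-many links the foreign key columns on songs are already normalized:

        -- Layout 1: foreign keys directly on songs (one album/publisher/artist per song)
        CREATE TABLE songs (
            id           INT PRIMARY KEY AUTO_INCREMENT,
            song_title   VARCHAR(255) NOT NULL,
            album_id     INT,
            publisher_id INT,
            artist_id    INT,
            FOREIGN KEY (album_id)     REFERENCES albums (id),
            FOREIGN KEY (publisher_id) REFERENCES publishers (id),
            FOREIGN KEY (artist_id)    REFERENCES artists (id)
        );

        -- Layout 2: a junction table per relationship (here: many artists per song)
        CREATE TABLE song_artist (
            song_id   INT NOT NULL,
            artist_id INT NOT NULL,
            PRIMARY KEY (song_id, artist_id),
            FOREIGN KEY (song_id)   REFERENCES songs (id),
            FOREIGN KEY (artist_id) REFERENCES artists (id)
        );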

    Read the article

  • MySQL: select from a table depending on which table the data is in

    - by user253530
    I have 3 tables holding products for a restaurant: products from the bar, food and ingredients. I use PHP and MySQL. I have another table that holds information about the orders that have been made so far. Its two most important fields hold the id of the product and the type (from the bar, from the kitchen or from the ingredients). I was thinking of writing the SQL query below so that it uses either the bar, kitchen or ingredients table, but it doesn't work. Basically the second table in the join must be either "bar", "produse" or "stoc".
    SELECT K.nume, COUNT(K.cantitate) as cantitate, SUM(K.pret) as pret, P.nume as NumeProduse
    FROM `clienti_fideli` as K
    JOIN if(P.tip,bar,produse) AS P ON K.produs = P.id_prod
    WHERE K.masa=18 and K.nume LIKE 'livrari-la-domiciliu'
    GROUP BY NumeProduse
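
    A JOIN cannot pick its target table per row, so one common workaround is to query each product table separately and combine them with UNION ALL before joining. A hedged sketch, assuming the three product tables share id_prod and nume columns and that the orders table records the type in a column (K.tip here) whose values name the source table:

        SELECT K.nume, COUNT(K.cantitate) AS cantitate, SUM(K.pret) AS pret, P.nume AS NumeProduse
        FROM clienti_fideli AS K
        JOIN (
            SELECT id_prod, nume, 'bar'     AS tip FROM bar
            UNION ALL
            SELECT id_prod, nume, 'produse' AS tip FROM produse
            UNION ALL
            SELECT id_prod, nume, 'stoc'    AS tip FROM stoc
        ) AS P ON K.produs = P.id_prod AND K.tip = P.tip
        WHERE K.masa = 18 AND K.nume LIKE 'livrari-la-domiciliu'
        GROUP BY NumeProduse;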

    Read the article

  • How to use text columns in a trigger

    - by Jeremy
    I am trying to use an update trigger in SQL 2000 so that when an update occurs, I insert a row into a history table and so retain all history on the table:
    CREATE TRIGGER trUpdate_MyTable ON MyTable FOR UPDATE AS
    INSERT INTO [MyTableHistory] (
        [AuditType], [MyTable_ID], [Inserted], [LastUpdated], [LastUpdatedBy], [Vendor_ID],
        [FromLocation], [FromUnit], [FromAddress], [FromCity], [FromProvince],
        [FromContactNumber], [Comment])
    SELECT [AuditType] = 'U', D.*
    FROM deleted D JOIN inserted I ON I.[ID] = D.[ID]
    GO
    Of course, I get an error: "Cannot use text, ntext, or image columns in the 'inserted' and 'deleted' tables." I tried joining to MyTable instead of deleted, but because the trigger fires after the update, it ends up inserting the new record into the history table, when I want the original record. How can I do this and still use text columns?
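
    SQL Server does allow text, ntext and image references in inserted and deleted inside INSTEAD OF triggers, so one possible workaround (sketched below, assuming the base table has the same columns as the history table apart from AuditType) is to log the old row and then apply the intercepted update yourself:

        CREATE TRIGGER trUpdate_MyTable ON MyTable
        INSTEAD OF UPDATE
        AS
        BEGIN
            -- deleted still holds the pre-update values here, text columns included
            INSERT INTO MyTableHistory
                (AuditType, MyTable_ID, Inserted, LastUpdated, LastUpdatedBy, Vendor_ID,
                 FromLocation, FromUnit, FromAddress, FromCity, FromProvince,
                 FromContactNumber, Comment)
            SELECT 'U', D.*
            FROM deleted D;

            -- now perform the update that the trigger intercepted
            UPDATE M
            SET LastUpdated       = I.LastUpdated,
                LastUpdatedBy     = I.LastUpdatedBy,
                Vendor_ID         = I.Vendor_ID,
                FromLocation      = I.FromLocation,
                FromUnit          = I.FromUnit,
                FromAddress       = I.FromAddress,
                FromCity          = I.FromCity,
                FromProvince      = I.FromProvince,
                FromContactNumber = I.FromContactNumber,
                Comment           = I.Comment
            FROM MyTable M
            JOIN inserted I ON I.ID = M.ID;
        END
        GO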

    Read the article

  • NHibernate: Mapping collections of value types

    - by Anry
    I have tables Order, Transaction and Payment. Class Order has the properties:
    public virtual Guid Id { get; set; }
    public virtual DateTime Created { get; set; } ...
    I added the properties:
    public virtual IList<Transaction> Transactions { get; set; }
    public virtual IList<Payment> Payments { get; set; }
    These properties hold records from the [Transaction] and [Payment] tables. How do I persist these lists to the database?

    Read the article

  • How to prevent GetOleDbSchemaTable from returning duplicate sheet names from Excel workbook

    - by Richard Bysouth
    Hi I have a function to return a DataView containing info on sheets in an Excel Workbook, as follows:
    Public Function GetSchemaInfo() As DataView
        Using connection As New OleDbConnection(GetConnectionString())
            connection.Open()
            Dim schemaTable As DataTable = connection.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, Nothing)
            connection.Close()
            Return New DataView(schemaTable)
        End Using
    End Function
    This works fine except that if the workbook has linked data (i.e. pulls its data from another workbook), duplicate sheet names are returned. For example, Workbook1 has a single worksheet, Sheet1. I get 2 rows in the DataView, with the TABLE_NAME field being "Sheet1$" and "Sheet1$_". OK, I could use a RowFilter, but wondered whether there was a better way or why this extra row is returned? thanks Richard

    Read the article

  • Word 2007 Question

    - by Lijo
    Hi Team, While preparing a Word 2007 document, I made a mistake (needless to say, I don't have any other copy of the document). While formatting (as a try) I applied the style "Apply Style to Body to match selection". This caused the document to end up in a totally wrong format, with numbering appearing even in tables. Have you ever faced this? Could you please tell me how to correct it? Hope you would be kind enough to answer this even though it is not strictly technical. Thanks Lijo

    Read the article

  • Table and Column names causing problems

    - by craig
    I have an issue when the T4 LINQ templates generate the classes for my MySQL db using SubSonic 3. It looks like one of our table names, "operator", is causing problems in the generated Context.cs class. In the following line of code in Context.cs, Visual Studio sees operator as a C# keyword and generates a compilation error of "Type expected":
    public Query<operator> operators { get; set; }
    Is there any way I can work around this without having to rename my database table and column names? For example, hard-coding something in Settings.ttinclude to use or map different names for specific db tables and columns?

    Read the article

  • Sql Server - INSERT INTO SELECT to avoid duplicates

    - by Ashish Gupta
    I have the following two tables:
    Table1 (ID, Name) with rows: (1, A), (2, B), (3, C)
    Table2 (ID, Name) with rows: (1, Z)
    I need to insert data from Table1 into Table2, and I can use the following syntax for that:
    INSERT INTO Table2(Id, Name) SELECT Id, Name FROM Table1
    However, in my case duplicate Ids might exist in Table2 (in my case it's just "1") and I don't want to copy that again, as that would throw an error. I can write something like this:
    IF NOT EXISTS(SELECT 1 FROM Table2 WHERE Id=1)
        INSERT INTO Table2 (Id, name) SELECT Id, name FROM Table1
    ELSE
        INSERT INTO Table2 (Id, name) SELECT Id, name FROM Table1 WHERE Table1.Id<>1
    Is there a better way to do this without using IF - ELSE? I want to avoid two INSERT INTO-SELECT statements based on some condition. Any help is appreciated.
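
    A hedged sketch of one common single-statement alternative (SQL Server syntax): filter the source rows against the destination with NOT EXISTS, so ids that are already present are simply skipped:

        INSERT INTO Table2 (Id, Name)
        SELECT t1.Id, t1.Name
        FROM Table1 AS t1
        WHERE NOT EXISTS (SELECT 1 FROM Table2 AS t2 WHERE t2.Id = t1.Id);

    On SQL Server 2008 and later, a MERGE statement with WHEN NOT MATCHED THEN INSERT is another way to express the same intent.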

    Read the article

  • Program to find canonical cover or minimum number of functional dependencies

    - by Sev
    I would like to know if there is a program or algorithm to find the canonical cover (minimal set of functional dependencies). For example, if you have R = (A, B, C), where A, B and C are the attributes of the relation, and the dependencies:
    A → BC
    B → C
    A → B
    AB → C
    then the canonical cover (the minimal set of dependencies) is:
    A → B
    B → C
    Is there a program that can accomplish this? If not, any code/pseudocode to help me write one would be appreciated. Prefer Python or Java.

    Read the article

  • Problems with contenttypes when loading a fixture in Django

    - by gerdemb
    I am having trouble loading Django fixtures into my MySQL database because of contenttypes conflicts. First I tried dumping the data from only my app like this:
    ./manage.py dumpdata escola > fixture.json
    but I kept getting missing foreign key problems, because my app "escola" uses tables from other applications. I kept adding additional apps until I got to this:
    ./manage.py dumpdata contenttypes auth escola > fixture.json
    Now the problem is the following constraint violation when I try to load the data as a test fixture:
    IntegrityError: (1062, "Duplicate entry 'escola-t23aluno' for key 2")
    It seems the problem is that Django is trying to dynamically recreate contenttypes with different primary key values that conflict with the primary key values from the fixture. This appears to be the same as the bug documented here: http://code.djangoproject.com/ticket/7052 The problem is that the recommended workaround is to dump the contenttypes app, which I'm already doing!? What gives? If it makes any difference, I do have some custom model permissions as documented here: http://docs.djangoproject.com/en/dev/ref/models/options/#permissions

    Read the article

  • Teradata equivalent of persisted computed column (in SQL Server)

    - by Cade Roux
    We have a few tables with persisted computed columns in SQL Server. Is there an equivalent of this in Teradata? And, if so, what is the syntax and are there any limitations? The particular computed columns I am looking at conform some account numbers by removing leading zeros; an index is also created on this conformed account number:
    ACCT_NUM_std AS ISNULL(
        CONVERT(varchar(39),
            SUBSTRING(LTRIM(RTRIM([ACCT_NUM])),
                PATINDEX('%[^0]%', LTRIM(RTRIM([ACCT_NUM])) + '.'),
                LEN(LTRIM(RTRIM([ACCT_NUM]))))), '') PERSISTED
    With the Teradata TRIM function, the trimming part would be a little simpler:
    ACCT_NUM_std AS COALESCE(CAST(TRIM(LEADING '0' FROM TRIM(BOTH FROM ACCT_NUM)) AS varchar(39)), '')
    I guess I could just make this a normal column and put the code to standardize the account numbers in all the processes which insert into the table; we used the computed column to keep the standardization code in one place.

    Read the article

  • How can I best geocode a table of addresses in SQL Server?

    - by ess
    I've got a SQL Server 2008 table with addresses. I've got some C# code that can individually geocode the addresses. I've got a Google Maps API for geocoding. Now I'm trying to figure out the most efficient way to use these resources. I could write a console app that manually updates the tables using my C# library, but the data I have is updated periodically. I will be performing an import routine of some sort and I'm thinking it would be 'simplest' to perform the geocoding as the import occurs. I'm not so strong on SQL Server capabilities, so I'm looking for advice. I've considered letting the import call an assembly I create that would be referenced in SQL Server, but read that Sql Server 2008 has made it virtually impossible to reference your own DLL. So my next guess is having the import call a web service to pass in the address and update the table with the results, but I've not had much luck in finding info on this method. Any advice?

    Read the article

  • How do I create a check constraint?

    - by Zack Peterson
    Please imagine this small database...
    Tables:
    Volunteer: Id, Name, Email, Phone, Comment
    Event: Id, Name, Location, Day, Description
    Shift: Id, EventId, VolunteerId, Description, Start, End
    EventVolunteer: EventId, VolunteerId
    Associations:
    Volunteers may sign up for multiple events. Events may be staffed by multiple volunteers. An event may have multiple shifts. A shift belongs to only a single event. A shift may be staffed by only a single volunteer. A volunteer may staff multiple shifts.
    Check Constraints:
    Can I create a check constraint to enforce that no shift is staffed by a volunteer that's not signed up for that shift's event? Can I create a check constraint to enforce that two overlapping shifts are never staffed by the same volunteer?
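
    A plain column CHECK constraint only sees the row being inserted or updated, so cross-table rules like the first one are usually routed through a scalar UDF referenced by the constraint (or enforced in a trigger). A hedged sketch assuming SQL Server and hypothetical object names; note that a UDF-based check is not re-validated if EventVolunteer rows are later deleted:

        CREATE FUNCTION dbo.fn_IsSignedUp (@EventId int, @VolunteerId int)
        RETURNS bit
        AS
        BEGIN
            IF EXISTS (SELECT 1 FROM dbo.EventVolunteer
                       WHERE EventId = @EventId AND VolunteerId = @VolunteerId)
                RETURN 1;
            RETURN 0;
        END;
        GO

        -- reject shifts staffed by a volunteer who has not signed up for the event
        ALTER TABLE dbo.Shift
        ADD CONSTRAINT CK_Shift_VolunteerSignedUp
            CHECK (dbo.fn_IsSignedUp(EventId, VolunteerId) = 1);

    The second rule (no overlapping shifts for the same volunteer) compares rows of the Shift table against each other, which is more naturally enforced with a trigger than with a check constraint.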

    Read the article

  • A good design pattern for almost similar objects

    - by Sam
    Hello, I have two websites that have an almost identical database schema. The only difference is that some tables in one website have 1 or 2 extra fields that the other doesn't, and vice versa. I want to use the same data access layer classes to manipulate both websites. What would be a good design pattern to handle that little difference? For example, I have a method createAccount(Account account) in my DAO class, but the implementation will be slightly different between website A and website B. I know design patterns don't depend on the language, but FYI I'm working with Perl. Thanks

    Read the article

  • Providing multi-version databases for backward compatibility with production applications/databases

    - by JavaRocky
    How can I manage multiple versions of a database easily? I have some data (exposed as views selecting data that originates in tables in other schemas) which other databases may reference using various means, including database synonyms and links. I wish to provide a sort of interface/guarantee for future applications/databases which use this data, in case I need to update the views for correctness or applicability inside my database. How can I achieve this in a maintainable, controlled and easy way? I am using Oracle 10g, if that matters.
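
    One hedged Oracle pattern, building on the synonyms already mentioned (all names here are hypothetical): keep versioned views and point consumers at a stable synonym, so the synonym can be re-targeted when a corrected view is introduced without touching the consumers:

        -- v2 is introduced alongside v1
        CREATE OR REPLACE VIEW app_api.customer_v2 AS
            SELECT cust_id, cust_name FROM core.customer;

        -- consumers always reference the stable name
        CREATE OR REPLACE SYNONYM app_api.customer FOR app_api.customer_v2;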

    Read the article

  • ObjectContext ConnectionString Sqlite

    - by codegarten
    I need to connect to an SQLite database, so I downloaded and installed System.Data.SQLite and dragged in all my tables with the designer. The designer created a .cs file with public class Entities : ObjectContext and 3 constructors:
    1st: public Entities() : base("name=Entities", "Entities")
    This one loads the connection string from App.config and works fine. App.config:
    <connectionStrings> <add name="Entities" connectionString="metadata=res://*/Db.TracModel.csdl|res://*/Db.TracModel.ssdl|res://*/Db.TracModel.msl;provider=System.Data.SQLite;provider connection string=&quot;data source=C:\Users\Filipe\Desktop\trac.db&quot;" providerName="System.Data.EntityClient" /> </connectionStrings>
    2nd: public Entities(string connectionString) : base(connectionString, "Entities")
    3rd: public Entities(EntityConnection connection) : base(connection, "Entities")
    Here is the problem: I have already tried many configurations and already used EntityConnectionStringBuilder to build the connection string, with no luck. Can you please point me in the right direction?
    EDIT(1): How can I construct a valid connection string?

    Read the article

  • GORM ID generation and belongsTo association?

    - by fabien-barbier
    I have two domains:
    class CodeSetDetail {
        String id
        String codeSummaryId
        static hasMany = [codes:CodeSummary]
        static constraints = {
            id(unique:true,blank:false)
        }
        static mapping = {
            version false
            id column:'code_set_detail_id', generator: 'assigned'
        }
    }
    and:
    class CodeSummary {
        String id
        String codeClass
        String name
        String accession
        static belongsTo = [codeSetDetail:CodeSetDetail]
        static constraints = {
            id(unique:true,blank:false)
        }
        static mapping = {
            version false
            id column:'code_summary_id', generator: 'assigned'
        }
    }
    I get two tables with these columns:
    code_set_detail: code_set_detail_id, code_summary_id
    code_summary: code_summary_id, code_set_detail_id (should not exist), code_class, name, accession
    I would like to link the code_set_detail and code_summary tables by 'code_summary_id' (and not by 'code_set_detail_id'). Note: 'code_summary_id' is defined as a column in the code_set_detail table, and as the primary key in the code_summary table. To sum up, I would like to define 'code_summary_id' as the primary key in the code_summary table, and map 'code_summary_id' in the code_set_detail table. How do I define a primary key in one table, and also map this key to another table?

    Read the article

  • How do I delete a foreign key in SQLAlchemy?

    - by Travis
    I'm using SQLAlchemy Migrate to keep track of database changes and I'm running into an issue with removing a foreign key. I have two tables: t_new is a new table, and t_exists is an existing table. I need to add t_new, then add a foreign key to t_exists. Then I need to be able to reverse the operation (which is where I'm having trouble).
    t_new = sa.Table("new", meta.metadata,
        sa.Column("new_id", sa.types.Integer, primary_key=True)
    )
    t_exists = sa.Table("exists", meta.metadata,
        sa.Column("exists_id", sa.types.Integer, primary_key=True),
        sa.Column("new_id", sa.types.Integer,
            sa.ForeignKey("new.new_id", onupdate="CASCADE", ondelete="CASCADE"),
            nullable=False
        )
    )
    This works fine:
    t_new.create()
    t_exists.c.new_id.create()
    But this does not:
    t_exists.c.new_id.drop()
    t_new.drop()
    Trying to drop the foreign key column gives an error:
    1025, "Error on rename of '.\my_db_name\#sql-1b0_2e6' to '.\my_db_name\exists' (errno: 150)"
    If I do this with raw SQL, I can remove the foreign key manually and then remove the column, but I haven't been able to figure out how to remove the foreign key with SQLAlchemy. How can I remove the foreign key, and then the column?
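
    For reference, a sketch of the raw-SQL path mentioned above, assuming MySQL: the foreign key has an auto-generated constraint name (typically something like exists_ibfk_1), which has to be dropped before the column can go:

        SHOW CREATE TABLE `exists`;                              -- read off the generated constraint name
        ALTER TABLE `exists` DROP FOREIGN KEY `exists_ibfk_1`;   -- drop the constraint first
        ALTER TABLE `exists` DROP COLUMN `new_id`;               -- now the column can be dropped
        DROP TABLE `new`;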

    Read the article

  • Best solution for a comment table for multiple content types

    - by KRTac
    I'm currently designing a comments table for a site I'm building. Users will be able to upload images, link videos and add audio files to their profile. Each of these types of content must be commentable. Now I'm wondering what the best approach to this is. My current options are:
    1. Have one big comments table and a link table for every content type (comments_videos, ...) with comment_id and _id.
    2. Have the comments separated by the type of content they're for, so each type of content would have its own comments table holding the comments for that type.
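
    A minimal sketch of the first option, assuming MySQL and hypothetical column names: one shared comments table plus a narrow junction table per content type (the video_id column would reference the site's videos table):

        CREATE TABLE comments (
            comment_id INT PRIMARY KEY AUTO_INCREMENT,
            user_id    INT NOT NULL,
            body       TEXT NOT NULL,
            created_at DATETIME NOT NULL
        );

        CREATE TABLE comments_videos (
            comment_id INT NOT NULL,
            video_id   INT NOT NULL,
            PRIMARY KEY (comment_id, video_id),
            FOREIGN KEY (comment_id) REFERENCES comments (comment_id)
        );
        -- comments_images and comments_audio follow the same shape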

    Read the article

  • What happens when auto_increment on integer column reaches the max_value in databases?

    - by Sanoj
    I am implementing a database application and I will use both JavaDB and MySQL as databases. I have an ID column in my tables that has integer as its type, and I use the database's auto_increment function for the value. But what happens when I get more than 2 (or 4) billion posts and integer is not enough? Does the integer overflow and continue, or is an exception thrown that I can handle? Yes, I could change to long as the datatype, but how do I check when that is needed? And I think there is a problem with the last_inserted_id() functions if I use long as the datatype for the ID column.
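
    In MySQL, for example, once AUTO_INCREMENT reaches the column type's maximum the next insert fails with a duplicate-key error rather than silently wrapping, so the usual approach is to monitor the counter and widen the column in time. A sketch with a hypothetical posts table:

        -- how close is the counter to the limit?
        SELECT AUTO_INCREMENT
        FROM information_schema.TABLES
        WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'posts';

        -- widen the key before it overflows (BIGINT UNSIGNED tops out around 1.8e19)
        ALTER TABLE posts
            MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;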

    Read the article

  • horizontally scrollable table column

    - by Alexandru Cucu
    Hello, I need to build an HTML table that has a horizontally scrollable column. The scroll should be placed in the column's header. My first question is: do you know any jQuery plug-in that is able to do this? My second question: is this possible using a single table? I've heard that in order to do this you need to use multiple synchronized tables that look like a single one from the user's perspective. Any idea/advice would be welcome. Thank you.

    Read the article

  • SQL Server 2005 script with join across Database Servers

    - by Robin Day
    I have the following script which I use to give me a simple "diff" between tables on two different databases. (Note: In reality my comparison is on a lot more than just an ID.)
    SELECT MyTableA.MyId, MyTableB.MyId
    FROM MyDataBaseA..MyTable MyTableA
    FULL OUTER JOIN MyDataBaseB..MyTable MyTableB ON MyTableA.MyId = MyTableB.MyId
    WHERE MyTableA.MyId IS NULL OR MyTableB.MyId IS NULL
    I now need to run this script on two databases that exist on different servers. At the moment my solution is to back up the database from one server, restore it to the other and then run the script. I'm pretty sure this is possible, however, is this likely to be a can of worms? This is a very rare task I need to perform and if it involves a large number of DB setting changes then I will probably stick to my backup method.
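
    One hedged way to avoid the backup/restore round-trip (SQL Server): register the remote server as a linked server once, then address its tables with four-part names. The server name and login below are placeholders:

        -- one-time setup on the server where the query will run
        EXEC sp_addlinkedserver @server = N'RemoteServer', @srvproduct = N'SQL Server';
        -- map a login if the current credentials do not carry over
        EXEC sp_addlinkedsrvlogin @rmtsrvname = N'RemoteServer', @useself = 'FALSE',
             @locallogin = NULL, @rmtuser = N'diff_user', @rmtpassword = N'********';

        SELECT MyTableA.MyId, MyTableB.MyId
        FROM MyDataBaseA..MyTable MyTableA
        FULL OUTER JOIN RemoteServer.MyDataBaseB.dbo.MyTable MyTableB
            ON MyTableA.MyId = MyTableB.MyId
        WHERE MyTableA.MyId IS NULL OR MyTableB.MyId IS NULL;

    Linked-server joins can be slow for large tables, so for a rare one-off comparison the backup/restore approach may still be the pragmatic choice.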

    Read the article

  • Shortest-path algorithms which use a space-time tradeoff?

    - by Chris Mounce
    I need to find shortest paths in an unweighted, undirected graph. There are algorithms which can find a shortest path between two nodes, but this can take time. There are also algorithms for computing shortest paths for all pairs of nodes in the graph, but storing such a lookup table would take lots of disk space. What I'm wondering: Is there an algorithm which offers a space-time tradeoff that's somewhere between these two extremes? In other words, is there a way to speed up a shortest-path search, while using less disk space than would be occupied by an all-pairs shortest-path table? I know there are ways to efficiently store lookup tables for this problem, and I already have a couple of ideas for speeding up shortest-path searches using precomputed data. But I don't want to reinvent the wheel if there's already some established algorithm that solves this problem.

    Read the article

  • Invoicing vs Quoting

    - by FreshCode
    If invoices can be voided, should they be used as quotations? I have an Invoices table that is created from inventory associated with a Job. I could have a Quotes table as a halfway house between inventory and invoices, but it feels like I would have duplicate data structures and logic just to handle an "Is this a quote?" bit. From a business perspective, quotes are different from invoices: a quote is sent prior to an undertaking and an invoice is sent once it is complete and payment is due. But how do I represent this in my repository and model? What is an elegant way to store and manage quotes and invoices in a database?
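
    A sketch of the single-table idea hinted at above (SQL Server syntax, hypothetical names): one document table whose status column distinguishes quotes from issued or voided invoices, so quotes and invoices share the same line items and logic, and a quote is promoted rather than copied:

        CREATE TABLE JobDocument (
            JobDocumentId INT IDENTITY PRIMARY KEY,
            JobId         INT NOT NULL,
            Status        VARCHAR(10) NOT NULL
                CHECK (Status IN ('Quote', 'Invoice', 'Void')),
            IssuedOn      DATETIME NULL
        );

        -- promote a quote to an invoice once the work is complete
        UPDATE JobDocument
        SET Status = 'Invoice', IssuedOn = GETDATE()
        WHERE JobDocumentId = 42;   -- placeholder id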

    Read the article
