Search Results

Search found 79317 results on 3173 pages for 'sql error messages'.

Page 873/3173 | < Previous Page | 869 870 871 872 873 874 875 876 877 878 879 880  | Next Page >

  • Check if table exists in c#

    - by apoorv020
    I want to read data from a table whose name is supplied by a user, so before actually starting to read data, I want to check whether that table exists. I have seen several pieces of code on the net which claim to do this, but they all seem to work only for SQL Server, or for MySQL, or some other specific implementation. Is there not a generic way to do this? (I am already separately checking that I can connect to the supplied database, so I'm fairly certain a connection can be opened.)
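
    A minimal, provider-agnostic sketch of one way to do this through ADO.NET's schema collections rather than vendor-specific SQL (the "TABLE_NAME" column name is what SqlClient, OleDb and the MySQL connector expose, but providers are not guaranteed to agree on it):

    using System;
    using System.Data;
    using System.Data.Common;

    static bool TableExists(DbConnection connection, string tableName)
    {
        // Every managed provider is supposed to implement GetSchema("Tables"),
        // though the exact columns returned can vary by provider.
        DataTable tables = connection.GetSchema("Tables");
        foreach (DataRow row in tables.Rows)
        {
            if (string.Equals(row["TABLE_NAME"] as string, tableName,
                              StringComparison.OrdinalIgnoreCase))
            {
                return true;
            }
        }
        return false;
    }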

    Read the article

  • Kohana v3, automatically escape illegal characters?

    - by Dom
    Quick question: does Kohana (version 3) automatically escape data that is passed into ORM::factory..... (and everywhere else that has to do with the database)? For example: $thread = ORM::factory('thread', $this->request->param('id')); Would the data passed in the second argument be auto-escaped before it goes into the SQL query, or do I have to do it manually? Probably a stupid question, and it's better to be safe than sorry, but yeah... I usually escape the data manually, but I want to know whether Kohana does this for me? Thanks

    Read the article

  • MSSQL Search Proper Names Full Text Index vs LIKE + SOUNDEX

    - by Matthew Talbert
    I have a database of names of people that has (currently) 35 million rows. I need to know the best method for quickly searching these names. The current system (not designed by me) simply has the first and last name columns indexed and uses "LIKE" queries, with the additional option of using SOUNDEX (though I'm not sure this is actually used much). Performance has always been a problem with this system, so currently the searches are limited to 200 results (which still takes too long to run). So, I have a few questions: Does a full-text index work well for proper names? If so, what is the best way to query them (CONTAINS, FREETEXT, etc.)? Is there some other system (like Lucene.net) that would be better? Just for reference, I'm using Fluent NHibernate for data access, so methods that work well with that will be preferred. I'm currently using MS SQL 2008.
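
    A minimal sketch of what a full-text query could look like through plain ADO.NET (the People table, its columns, and the full-text index itself are assumptions; a full-text catalog has to cover FirstName and LastName before CONTAINS will run):

    using System.Data.SqlClient;

    static void SearchNames(string connectionString, string searchTerm)
    {
        // Sketch only: CONTAINS over a column list instead of LIKE.
        // Pass e.g. "\"smith*\"" as searchTerm for a prefix search.
        const string sql =
            @"SELECT TOP (200) PersonId, FirstName, LastName
              FROM dbo.People
              WHERE CONTAINS((FirstName, LastName), @search);";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@search", searchTerm);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // ... materialize matches
                }
            }
        }
    }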

    Read the article

  • In native C++, how does one use a SqlCe .sdf database?

    - by Omer Sabic
    Is there a simple way, without .NET? I've found some libraries, but none for SqlCe 3.5. There is http://sqlcehelper.codeplex.com/ but it's far from done, since a major feature like using a password is not yet implemented. I've looked at the source and it uses OLE DB to handle the database. The official Microsoft Northwind example (which is shipped with SQL Compact 3.1, but not with 3.5) also doesn't work; I've tried setting it up with no success. At the moment I don't have any working sample code. Was anyone able to set this up paired with a password-protected .sdf? What are the alternatives? Thanks.

    Read the article

  • Working with images in WCF

    - by hgulyan
    Hi, I have a desktop application that needs to upload/download images to/from a service computer over TCP. At first I stored the images in the file system, but now I need to store them in an MS SQL DB as well, to compare which solution is better. The number of images is over half a million, and I don't know yet whether there will be any limit on the size of a photo. If you have done something like this, please write what your opinion is on this question. Which approach is faster and safer? Which of them works better with this number of photos? If I store them in the DB, do I need to keep the images apart from all the other tables my application uses, and which column type works better - image or varbinary? ...and so on. Thank you.
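
    For the column-type part of the question: the image type is deprecated on SQL Server 2005 and later in favour of varbinary(max). A minimal sketch of a parameterized insert into such a column (the Photos table and its columns are placeholders, not part of the question):

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    static void SavePhoto(string connectionString, string path)
    {
        byte[] imageBytes = File.ReadAllBytes(path);

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO Photos (PhotoId, Data) VALUES (@id, @data)", connection))
        {
            command.Parameters.AddWithValue("@id", Guid.NewGuid());
            command.Parameters.Add("@data", SqlDbType.VarBinary, -1).Value = imageBytes; // -1 = MAX
            connection.Open();
            command.ExecuteNonQuery();
        }
    }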

    Read the article

  • Calculate hours difference in datetime

    - by ScG
    I have a series of datetime values. I want to select records with a difference of 2 or more hours between them.
    2010-02-11 08:55:00.000
    2010-02-11 10:45:00.000
    2010-02-11 10:55:00.000
    2010-02-11 12:55:00.000
    2010-02-11 14:52:00.000
    2010-02-11 16:55:00.000
    2010-02-11 17:55:00.000
    2010-02-11 23:55:00.000
    2010-02-12 00:55:00.000
    2010-02-12 02:55:00.000
    Expected (each date is compared with the last date that qualified for the 2-hour difference):
    2010-02-11 08:55:00.000
    2010-02-11 10:55:00.000
    2010-02-11 12:55:00.000
    2010-02-11 16:55:00.000
    2010-02-11 23:55:00.000
    2010-02-12 02:55:00.000
    I am using SQL 2005 or 2008.
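
    A minimal sketch of the filtering rule itself, written as C# over an in-memory list purely to illustrate the "compare against the last row that qualified" logic (a SQL 2005/2008 solution would typically need a cursor or recursive CTE to express the same running comparison):

    using System;
    using System.Collections.Generic;

    static List<DateTime> FilterByGap(IEnumerable<DateTime> orderedValues, TimeSpan minimumGap)
    {
        // Keep a value only when it is at least minimumGap after the last value kept,
        // always starting from the first value.
        var kept = new List<DateTime>();
        DateTime? lastKept = null;

        foreach (DateTime current in orderedValues)
        {
            if (lastKept == null || current - lastKept.Value >= minimumGap)
            {
                kept.Add(current);
                lastKept = current;
            }
        }
        return kept;
    }

    // usage: FilterByGap(timestampsOrderedAscending, TimeSpan.FromHours(2));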

    Read the article

  • Trying to use VB to automate some queries. Running into what looks like a string problem

    - by Jeff
    Hi there, I'm using MS Access 2003 and I'm trying to execute a few queries at once using VB. When I write out the query in SQL it works fine, but when I try to do it in VB it asks me to "Enter Parameter Value" for DEPA, then DND (which are the first few letters of two strings I have). Here's the code:
    Option Compare Database
    Public Sub RemoveDupelicateDepartments()
        Dim oldID As String
        Dim newID As String
        Dim sqlStatement As String
        oldID = "DND-01"
        newID = "DEPA-04"
        sqlStatement = "UPDATE [Clean student table] SET [HomeDepartment]=" & newID & " WHERE [HomeDepartment]=" & oldID & ";"
        DoCmd.RunSQL sqlStatement & ""
    End Sub
    It looks to me as though it's taking in the string up to the "-" and then nothing else. I dunno, that's why I'm asking lol. What should my code look like?

    Read the article

  • Linq to SQL Where clause based on field selected at runtime

    - by robasaurus
    I'm trying to create a simple reusable search using LINQ to SQL. I pass in a list of words entered in a search box, and the results are then filtered based on these criteria.
    private IQueryable<User> BasicNameSearch(IQueryable<User> usersToSearch, ICollection<string> individualWordsFromSearch)
    {
        return usersToSearch
            .Where(user => individualWordsFromSearch.Contains(user.Forename.ToLower())
                || individualWordsFromSearch.Contains(user.Surname.ToLower()));
    }
    Now I want this same search functionality on a different data source, and I want to dynamically select the fields the search applies to. For instance, instead of an IQueryable of Users I may have an IQueryable of Cars, and instead of Forename and Surname the search goes off Make and Model. Basically, the goal is to reuse the search logic by dynamically selecting what to search on at runtime.
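
    A minimal sketch of one way to keep the word-matching generic by passing the searchable fields in as selector expressions (LINQ to SQL can inline invocation expressions, but this is a sketch rather than a tested mapping, and the entity/field names in the usage comment are only examples):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Linq.Expressions;

    static IQueryable<T> SearchByWords<T>(
        IQueryable<T> source,
        ICollection<string> words,
        params Expression<Func<T, string>>[] fields)
    {
        ParameterExpression item = Expression.Parameter(typeof(T), "item");
        var toLower = typeof(string).GetMethod("ToLower", Type.EmptyTypes);
        Expression predicate = null;

        foreach (var field in fields)
        {
            // OR together "words.Contains(field(item).ToLower())" for every selector supplied
            Expression fieldValue = Expression.Invoke(field, item);
            Expression lowered = Expression.Call(fieldValue, toLower);
            Expression contains = Expression.Call(
                Expression.Constant(words),
                typeof(ICollection<string>).GetMethod("Contains"),
                lowered);

            predicate = predicate == null ? contains : Expression.OrElse(predicate, contains);
        }

        return predicate == null
            ? source
            : source.Where(Expression.Lambda<Func<T, bool>>(predicate, item));
    }

    // usage: SearchByWords(users, words, u => u.Forename, u => u.Surname);
    //        SearchByWords(cars, words, c => c.Make, c => c.Model);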

    Read the article

  • Error in MySQL Workbench Forward Engineer Stored Procedures

    - by colithium
    I am using MySQL Workbench (5.1.18 OSS rev 4456) to forward engineer a SQL CREATE script. For every stored procedure, the automatic process outputs something like:
    DELIMITER //
    USE DB_Name//
    DB_Name//
    DROP procedure IF EXISTS `DB_Name`.`SP_Name` //
    USE DB_Name//
    DB_Name//
    CREATE PROCEDURE `DB_Name`.`SP_Name` (id INT)
    BEGIN
    SELECT * FROM Table_Name WHERE Id = id;
    END//
    The two lines that are simply the database name followed by the delimiter are errors and are reported as such when running the script. As long as they are ignored, it looks like everything gets created just fine. But why would it add those lines? I am creating the database in the WAMP environment, which uses MySQL 5.1.36.

    Read the article

  • How to quickly analyse large MDB file

    - by Craig Johnston
    I need to know how to quickly analyse a large MDB file (about 1 GB) to see which tables are causing it to be so big. Is there something that will easily show me a breakdown of which tables are responsible for how much data? I need to know whether it is just this one customer who is using the application differently, or whether there is genuinely a lot of data in the MDB. This MDB is currently causing the VB app to crash, and I need to know why it is so big so that I can maybe think about how to put some of the data into another 'archival' MDB. Migrating to SQL Server is not an option, unless the use of linked tables from an MDB is a realistic option.

    Read the article

  • Oracle: '= ANY()' vs. 'IN ()'

    - by eidylon
    Hi all, I just stumbled upon something in Oracle SQL (not sure if it exists in others) that I am curious about. I am asking here as a wiki, since it's hard to search for symbols in Google... I just found that when checking a value against a set of values you can do WHERE x = ANY (a, b, c) as opposed to the usual WHERE x IN (a, b, c). So I'm curious: what is the reasoning for these two syntaxes? Is one standard and one some oddball Oracle syntax? Or are they both standard? And is there a preference for one over the other for performance reasons, or anything else? Just curious what anyone can tell me about that '= ANY' syntax. CheerZ!

    Read the article

  • Javascript string syntax to write SQL

    - by sebastien leblanc
    I am writing an SQL query as a JavaScript string, like this:
    SQLdetail = 'SELECT [Avis SAP], Avis.[Ordre SAP], [Date Appel], [Heur Appel], Client_List![Code Client], [Numero Passerelle], [Designation Appel], Ordre![Metier], Ordre!Repercussion, Ordre!Objet, Ordre![Profil Panne], Ordre!Cause, Ordre![Sommaire Correctif], Ordre![Statut]'
    SQLdetail += ' FROM (Avis'
    SQLdetail += ' LEFT JOIN Client_List ON Avis.[Numero Client] = Client_List.[Numero Client])'
    SQLdetail += ' LEFT JOIN Ordre ON Avis.[Ordre SAP] = Ordre.[Ordre SAP] WHERE Avis.[Date Appel] BETWEEN #' & DateOne & '# AND #' & DateTwo & '#;'
    alert('SQLdetail:' + SQLdetail)
    and the last SQLdetail += somehow returns "0". Am I missing something in the syntax that just turns the whole string into a 0?

    Read the article

  • Generate non-identity primary key

    - by MikeWyatt
    My workplace doesn't use identity columns or GUIDs for primary keys. Instead, we retrieve "next IDs" from a table as needed, and increment the value for each insert. Unfortunately for me, LINQ to SQL appears to be optimized around using identity columns, so I need to query and update the "NextId" table whenever I perform an insert. For simplicity, I do this immediately after creating the new object. Since all operations between the creation of the data context and the call to SubmitChanges are part of one transaction, do I need to create a separate data context for retrieving next IDs? Each time I need an ID, I have to query and update a table inside a transaction to prevent multiple apps from grabbing the same value. Is a separate data context the only way, or is there something better I could try?
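
    A minimal sketch of reserving an ID on its own connection, so the increment commits immediately and independently of the main DataContext's pending changes (the NextIds table, its columns, and the method itself are assumptions about the schema; OUTPUT needs SQL Server 2005 or later):

    using System.Data.SqlClient;

    static int GetNextId(string connectionString, string tableName)
    {
        // The single-statement UPDATE ... OUTPUT is atomic, so two apps cannot
        // read back the same value. Table and column names are placeholders.
        const string sql =
            @"UPDATE NextIds
              SET NextId = NextId + 1
              OUTPUT INSERTED.NextId
              WHERE TableName = @tableName;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@tableName", tableName);
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }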

    Read the article

  • Multiple Inheritance in LINQtoSQL?

    - by Bumble Bee
    Guys, I have been surfing the web to find a way to use multiple-table inheritance in LINQ to SQL, but it looks like it only supports single-table inheritance, which is not the best way to achieve inheritance in an ORM framework. I have read that this will be addressed in upcoming LINQ and Entity Framework implementations, but how long a wait are we talking about? In the meantime, if any of you have tried a workaround implementation to achieve this, please let me know. I'm also thinking of using my leisure time to come up with such an implementation, so suggestions are welcome! /Bumble Bee

    Read the article

  • Handling Datetime with decimal '2010-02-14 20:18:58.313000000'

    - by AaronLS
    In SQL Server I have some textual data in varchar fields that I am trying to convert to datetimes. The funny thing is this data at some point was in a datetime field, exported to a flat file, and now I am reimporting it. The problem is it is in this format 2010-02-14 20:18:58.313000000 and the conversion to datetime fails. I have no idea how it ended up like this when it was originally extracted from a datetime column. Basically a table was exported to a flat file by someone else, and the original table was lost. I am reimporting back from the flat file. I could just drop the decimal portion, but this would be like throwing out some of the data. I'd like to maintain as much precision as possible. How can I import this data from the varchar column back into a datetime column and preserve as much accuracy as possible?
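
    If the re-import runs through .NET, a minimal sketch of parsing such values (assumes the nine-digit fractional part shown in the question; DateTime's custom formats stop at seven fractional digits, and since SQL Server's datetime only kept about 3 ms of precision anyway, the trimmed digits are trailing zeros):

    using System;
    using System.Globalization;

    static DateTime ParseExportedValue(string raw)
    {
        const int maxLength = 27;   // "yyyy-MM-dd HH:mm:ss." plus 7 fractional digits
        string trimmed = raw.Length > maxLength ? raw.Substring(0, maxLength) : raw;

        return DateTime.ParseExact(trimmed, "yyyy-MM-dd HH:mm:ss.fffffff",
                                   CultureInfo.InvariantCulture);
    }

    // ParseExportedValue("2010-02-14 20:18:58.313000000") -> 2010-02-14 20:18:58.313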

    Read the article

  • schema for storing different varchar fields over time?

    - by Henry
    This app I'm working on needs to store some metadata fields about an entity. The problem is that we can already foresee that these fields are going to change a lot in the future. I'm using Hibernate (in ColdFusion) and each entity's properties are translated to one column in the entity table, but altering table columns later down the road will be costly and error-prone, right? Should I go for something like this?
    MetaDataField
    -----
    metaDataFieldID (PK), name
    FieldValue
    ----------
    EntityID (PK, FK), metaDataFieldID (PK, FK), value [varchar(255)]
    Is this common? Anything to watch out for? P.S. I also thought of using XML on SQL Server 05+. After talking to some people, it seems that is not a viable solution because it will be too slow for certain reporting queries.

    Read the article

  • Fastest way to calculate summary of database field

    - by Jo-wen
    I have an MS SQL table with the following structure: Name nvarchar; Sign nvarchar; Value int.
    Example contents:
    Test1, 'plus', 5
    Test1, 'minus', 3
    Test2, 'minus', 1
    I would like to have totals per Name (add when sign = plus, subtract when sign = minus). Result:
    Test1, 2
    Test2, -1
    I want to show these results (and update them when a new record is added), and I'm looking for the fastest solution! [sproc? fast-forward cursor? calculate in .net?]
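
    A minimal sketch of letting the database compute the signed totals in a single GROUP BY, read back through ADO.NET ("SignedValues" stands in for the real table name, which the question does not give):

    using System.Collections.Generic;
    using System.Data.SqlClient;

    static Dictionary<string, int> LoadTotals(string connectionString)
    {
        // Add when Sign = 'plus', subtract otherwise, and total per Name in one statement.
        const string sql =
            @"SELECT Name,
                     SUM(CASE WHEN Sign = 'plus' THEN Value ELSE -Value END) AS Total
              FROM SignedValues
              GROUP BY Name;";

        var totals = new Dictionary<string, int>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    totals[reader.GetString(0)] = reader.GetInt32(1);
                }
            }
        }
        return totals;
    }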

    Read the article

  • how to read the txt file from the database(line by line)

    - by Ranjana
    I have stored the txt file in a SQL Server database. I need to read the txt file line by line to get the content in it. My code:
    DataTable dtDeleteFolderFile = new DataTable();
    dtDeleteFolderFile = objutility.GetData("GetTxtFileonFileName", new object[] { ddlSelectFile.SelectedItem.Text }).Tables[0];
    foreach (DataRow dr in dtDeleteFolderFile.Rows)
    {
        name = dr["FileName"].ToString();
        records = Convert.ToInt32(dr["NoOfRecords"].ToString());
        bytes = (Byte[])dr["Data"];
    }
    FileStream readfile = new FileStream(Server.MapPath("txtfiles/" + name), FileMode.Open);
    StreamReader streamreader = new StreamReader(readfile);
    string line = "";
    line = streamreader.ReadLine();
    But here I have used a FileStream to read from a particular path, whereas I have saved the txt file in byte format in my database. How can I read the txt file content using the byte[] value, instead of using the path?
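
    A minimal sketch of reading the content straight from the byte[] pulled out of the Data column, with no file path involved (assumes the stored bytes are plain text in the default encoding):

    using System.IO;

    static void ReadLines(byte[] bytes)
    {
        using (var memoryStream = new MemoryStream(bytes))
        using (var reader = new StreamReader(memoryStream))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // ... process each line of the stored txt file
            }
        }
    }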

    Read the article

  • nHibernate: Query tree nodes where self or ancestor matches condition

    - by Famous Nerd
    I have seen a lot of competing theories about hierarchical queries in Fluent NHibernate, or even basic NHibernate, and how they're a difficult beast. Does anyone know of good resources on the subject? I find myself needing to do queries similar to (using a file system analog): select folderObjects from folders where folder.Permissions includes :myPermissionLevel or [any of my ancestors] includes :myPermissionLevel. This is a one-to-many tree; no node has multiple parents. I'm not sure how to describe this in NHibernate-specific terms, or even SQL terms. I've seen the phrase "nested sets" mentioned; is this applicable? I'm not sure. Can anyone offer any advice on approaches to writing this sort of NHibernate query?

    Read the article

  • django left join with null

    - by SledgehammerPL
    The model:
    class Product(models.Model):
        name = models.CharField(max_length = 128)
        def __unicode__(self):
            return self.name

    class Receipt(models.Model):
        name = models.CharField(max_length=128)
        components = models.ManyToManyField(Product, through='ReceiptComponent')
        class Admin:
            pass
        def __unicode__(self):
            return self.name

    class ReceiptComponent(models.Model):
        product = models.ForeignKey(Product)
        receipt = models.ForeignKey(Receipt)
        quantity = models.FloatField(max_length=9)
        unit = models.ForeignKey(Unit)
        def __unicode__(self):
            return unicode(self.quantity!=0 and self.quantity or '') + ' ' + unicode(self.unit) + ' ' + self.product.genitive
    The idea: there are components in stock, and I'd like to find out which recipes I can make with the components I have. It's not easy - but possible - and I made a SQL view which gets the solution. But I'm learning Python and Django, so I'd like to make it Django-style ;D The concept of the solution:
    1. Get the set of recipes which have at least one of the available components:
    list_of_available_components = ReceiptComponent.objects.filter(product__in=list_of_available_products).distinct()
    list_of_related_receipts = Receipt.objects.filter(receiptcomponent__in = list_of_available_components).distinct()
    2. Get recipes (from list_of_related_receipts) which are missing at least one component:
    list_of_incomplete_recipes = (SELECT * FROM drinkbook_receiptcomponent LEFT JOIN drinkstore_stock_products USING(product_id) WHERE drinkstore_stock_products.stock_id IS NULL AND receipt_id IN (SELECT receipt_id FROM drinkbook_receiptcomponent JOIN drinkstore_stock_products USING(product_id)))
    3. Get recipes (from list_of_related_receipts) which are not in list_of_incomplete_recipes.

    Read the article

  • gridview check duplicates not using sql

    - by Tomasusa
    I have this code:
    foreach (GridViewRow dr in gvCategories.Rows)
    {
        if (dr.Cells[0].Text == txtEnterCategory.Text.Trim())
            isError = true;
    }
    Debugging shows that dr.Cells[0].Text is always "", even though there are records. How can I use a loop to check each row and find out whether a record already exists in the gridview, without using SQL?
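
    One hedged alternative, in case the empty cell text comes from the column being a TemplateField (rendered cell text is often empty there): expose the value through the GridView's DataKeys instead of reading cell text. This assumes DataKeyNames="CategoryName" is set on gvCategories; the key name is hypothetical, and gvCategories/txtEnterCategory are the controls from the question.

    bool isError = false;
    string candidate = txtEnterCategory.Text.Trim();

    foreach (GridViewRow row in gvCategories.Rows)
    {
        string existing = gvCategories.DataKeys[row.RowIndex]["CategoryName"] as string;
        if (string.Equals(existing, candidate, StringComparison.OrdinalIgnoreCase))
        {
            isError = true;
            break;
        }
    }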

    Read the article

  • Dynamically removing records when certain columns = 0; data cleansing

    - by cdburgess
    I have a simple card table:
    CREATE TABLE `users_individual_cards` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `user_id` char(36) NOT NULL,
      `individual_card_id` int(11) NOT NULL,
      `own` int(10) unsigned NOT NULL,
      `want` int(10) unsigned NOT NULL,
      `trade` int(10) unsigned NOT NULL,
      PRIMARY KEY (`id`),
      UNIQUE KEY `user_id` (`user_id`,`individual_card_id`),
      KEY `user_id_2` (`user_id`),
      KEY `individual_card_id` (`individual_card_id`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1;
    I have AJAX to add and remove the records based on OWN, WANT, and TRADE. However, if the user removes all of their OWN, WANT, and TRADE cards, the counts go to zero but the record is left in the database. I would prefer to have the record removed. Is checking after each update to see if all the columns = 0 the only way to do this? Or can I set up a conditional trigger with something like:
    // pseudo SQL
    AFTER UPDATE IF (OWN = 0, WANT = 0, TRADE = 0) DELETE
    What is the best way to do this? Can you help with the syntax?

    Read the article

  • Dynamic Database connection

    - by gsieranski
    I have multiple client databases that I need to hit on the fly, and I am having trouble getting my code to work. At first I was just storing a connection string in a client object in the db, pulling it out based on the logged-in user, and passing it to the LINQ data context constructor. This works fine in my development environment but is failing on the Winhost server I am using, which is running SQL 2008. I am getting a "The current configuration system does not support user scoped settings." error. Any help or guidance on this issue is greatly appreciated. Greg

    Read the article

  • T-SQL foreign key check constraint

    - by PaN1C_Showt1Me
    When you create a foreign key constraint in a table and generate the script in MS SQL Management Studio, it looks like this:
    ALTER TABLE T1 WITH CHECK ADD CONSTRAINT FK_T1 FOREIGN KEY(project_id) REFERENCES T2 (project_id)
    GO
    ALTER TABLE T1 CHECK CONSTRAINT FK_T1
    GO
    What I don't understand is what purpose the second ALTER, with CHECK CONSTRAINT, serves. Isn't creating the FK constraint enough? Do you have to add the check constraint to assure referential integrity? Another question: how would it look when you write it directly in the column definition?
    CREATE TABLE T1 (
        my_column INT NOT NULL CONSTRAINT FK_T1 REFERENCES T2(my_column)
    )
    Isn't this enough?

    Read the article
