Search Results

Search found 1805 results on 73 pages for 'varchar'.

Page 45 of 73

  • C# SQL Data Adapter System.Data.StrongTypingException

    - by René
    I get my data from SQL into a DataSet with Fill. It's just one table with two columns (CategoryId (int) and CategoryName (varchar)). When I look at my dataset after the Fill method, the CategoryId column seems to be correct, but in CategoryName I get a System.Data.StrongTypingException. What could that mean? Any ideas?


  • TSQL: Variable scope and EXEC()

    - by Joel
    declare @test varchar(20)
    set @test = 'VALUE'
    exec(' select '+@test+' ')

    This returns: Invalid column name 'VALUE'. Is there an alternate method to display the variable value in the select statement?
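
    The dynamic SQL pastes VALUE in as an identifier, which is why it is treated as a column name. A minimal sketch of the two usual fixes: quote the value into the string as a literal, or (better) pass it as a parameter with sp_executesql:

        declare @test varchar(20)
        set @test = 'VALUE'

        -- embed it as a quoted string literal...
        exec('select ''' + @test + '''')

        -- ...or pass it as a parameter, which avoids the quoting entirely
        exec sp_executesql N'select @val', N'@val varchar(20)', @val = @test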


  • How to convert a table column to another data type

    - by holden
    I have a varchar column in my Postgres database which I meant to be integers... and now I want to change them. Unfortunately this doesn't seem to work using my Rails migration:

        change_column :table1, :columnB, :integer

    So I tried doing this:

        execute 'ALTER TABLE "table1" ALTER COLUMN "columnB" TYPE integer USING CAST(columnB AS INTEGER)'

    but the cast doesn't work in this instance because some of the column values are null... any ideas?
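
    A minimal sketch of a workaround, assuming the failing values are empty strings rather than true NULLs (a NULL casts to integer without complaint, but '' does not); NULLIF turns them into NULLs before the cast:

        -- note: the unquoted columnB in the CAST above folds to columnb,
        -- so the quoted, mixed-case name is used here
        ALTER TABLE "table1"
            ALTER COLUMN "columnB" TYPE integer
            USING NULLIF("columnB", '')::integer;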


  • Service Broker message processing order

    - by Blootac
    Everywhere I read says that messages handled by Service Broker are processed in the order that they arrive, and yet if you create a table, message type, contract, service etc., and on activation have a stored proc that waits for 2 seconds and inserts the msg into a table, set the max queue readers to 5 or 10, and send 20-odd messages, I can see in the table that they are inserted out of order, even though when I insert them into the queue and look at the contents of the queue I can see that the messages are all in the right order.

    Is it due to the delay WAITFOR waiting for the nearest second and each thread having different subsecond times and then fighting for a lock or something? The reason I've got a delay in there is to simulate delays with joins etc. Thanks.

    Demo code:

        --create the table and service broker
        CREATE TABLE test
        (
            id int identity(1,1),
            contents varchar(100)
        )

        CREATE MESSAGE TYPE test

        CREATE CONTRACT mycontract
        (
            test sent by initiator
        )
        GO

        CREATE PROCEDURE dostuff
        AS
        BEGIN
            DECLARE @msg varchar(100);
            RECEIVE TOP (1) @msg = message_body FROM myQueue
            IF @msg IS NOT NULL
            BEGIN
                WAITFOR DELAY '00:00:02'
                INSERT INTO test(contents) VALUES (@msg)
            END
        END
        GO

        ALTER QUEUE myQueue WITH STATUS = ON,
            ACTIVATION
            (
                STATUS = ON,
                PROCEDURE_NAME = dostuff,
                MAX_QUEUE_READERS = 10,
                EXECUTE AS SELF
            )

        create service senderService on queue myQueue ( mycontract )
        create service receiverService on queue myQueue ( mycontract )
        GO

        --**********************************************************
        --now insert lots of messages to the queue
        DECLARE @dialog_handle uniqueidentifier

        BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>1</test>');

        BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>2</test>')

        BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>3</test>')

        BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>4</test>')

        BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>5</test>')

        BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>6</test>')

        BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>7</test>')
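
    Service Broker only guarantees ordering within a single conversation; every BEGIN DIALOG above starts a new conversation, and with MAX_QUEUE_READERS = 10 the separate conversations are handed to parallel activation procs, so the table rows can land in any order. A minimal sketch of the two changes that would keep the inserts ordered, reusing the demo's object names: send everything on one conversation, and keep the RECEIVE and the INSERT in one transaction so the conversation group stays locked while the message is processed.

        -- Sender: one dialog, many messages.
        DECLARE @dialog_handle uniqueidentifier;
        BEGIN DIALOG @dialog_handle
            FROM SERVICE senderService
            TO SERVICE 'receiverService'
            ON CONTRACT mycontract;

        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>1</test>');
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>2</test>');
        -- ...and so on for the remaining messages on the same conversation...
        GO

        -- Activation proc: receive and insert inside one transaction.
        ALTER PROCEDURE dostuff
        AS
        BEGIN
            DECLARE @msg varchar(100);
            BEGIN TRANSACTION;
            RECEIVE TOP (1) @msg = message_body FROM myQueue;
            IF @msg IS NOT NULL
            BEGIN
                WAITFOR DELAY '00:00:02';
                INSERT INTO test (contents) VALUES (@msg);
            END
            COMMIT;
        END
        GO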


  • IP address numbers in MySQL subquery

    - by Iain Collins
    I have a problem with a subquery involving IPv4 addresses stored in MySQL (MySQL 5.0). The IP addresses are stored in two tables, both in network number format - e.g. the format output by MySQL's INET_ATON(). The first table ('events') contains lots of rows with IP addresses associated with them, the second table ('network_providers') contains a list of provider information for given netblocks.

    events table (~4,000,000 rows):
        event_id (int)
        event_name (varchar)
        ip_address (unsigned 4 byte int)

    network_providers table (~60,000 rows):
        ip_start (unsigned 4 byte int)
        ip_end (unsigned 4 byte int)
        provider_name (varchar)

    Simplified for the purposes of the problem I'm having, the goal is to create an export along the lines of:

        event_id,event_name,ip_address,provider_name

    If I do a query along the lines of either of the following, I get the result I expect:

        SELECT provider_name FROM network_providers
        WHERE INET_ATON('192.168.0.1') >= network_providers.ip_start
        ORDER BY network_providers.ip_start DESC
        LIMIT 1

        SELECT provider_name FROM network_providers
        WHERE 3232235521 >= network_providers.ip_start
        ORDER BY network_providers.ip_start DESC
        LIMIT 1

    That is to say, it returns the correct provider_name for whatever IP I look up (of course I'm not really using 192.168.0.1 in my queries).

    However, when performing this same query as a subquery, in the following manner, it doesn't yield the result I would expect:

        SELECT event.id, event.event_name,
               (SELECT provider_name FROM network_providers
                WHERE event.ip_address >= network_providers.ip_start
                ORDER BY network_providers.ip_start DESC
                LIMIT 1) as provider
        FROM events

    Instead, a different (incorrect) value for network_provider is returned - over 90% (but curiously not all) of the values returned in the provider column contain the wrong provider information for that IP.

    Using event.ip_address in a subquery just to echo out the value confirms it contains the value I'd expect and that the subquery can parse it. Replacing event.ip_address with an actual network number also works; it's just using it dynamically in the subquery in this manner that doesn't work for me.

    I suspect the problem is that there is something fundamental and important about subqueries in MySQL that I don't get. I've worked with IP addresses like this in MySQL quite a bit before, but haven't previously done lookups for them using a subquery.

    The question: I'd really appreciate an example of how I could get the output I want, and if someone here knows, some enlightenment as to why what I'm doing doesn't work so I can avoid making this mistake again.

    Notes: The actual real-world usage I'm trying to do is considerably more complicated (involving joining two or three tables). This is a simplified version, to avoid overly complicating the question. Additionally, I know I'm not using a BETWEEN on ip_start & ip_end - that's intentional (the DBs can be out of date, and in such cases the owner in the DB is almost always in the next specified range and a 'best guess' is fine in this context). However, I'm grateful for any suggestions for improvement that relate to the question. Efficiency is always nice, but in this case absolutely not essential - any help appreciated.
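
    One workaround sketch that keeps the same "nearest lower ip_start" semantics but avoids relying on ORDER BY ... LIMIT inside a correlated subquery (which may be where 5.0 is going wrong here) is to pick the row via MAX(); events is aliased as e, since the query above mixes events and event:

        SELECT e.event_id,
               e.event_name,
               e.ip_address,
               (SELECT np.provider_name
                FROM network_providers AS np
                WHERE np.ip_start =
                      (SELECT MAX(np2.ip_start)
                       FROM network_providers AS np2
                       WHERE np2.ip_start <= e.ip_address)
                LIMIT 1) AS provider_name
        FROM events AS e;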


  • Help with MySQL query

    - by Michael S.
    I have a table that contains the following columns: ip (varchar 255), index (bigint 20), time (timestamp). Each time something is inserted there, the time column gets the current timestamp. I want to run a query that returns all the rows that have been added in the last 24 hours. This is what I try to execute:

        SELECT ip, index FROM users
        WHERE ip = 'some ip'
          AND TIMESTAMPDIFF(HOURS, time, NOW()) < 24

    And it doesn't work. Can someone help me out? Thanks :)
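
    Two likely culprits in the query above: the TIMESTAMPDIFF unit keyword is HOUR (not HOURS), and index is a reserved word in MySQL, so it needs backticks. A minimal sketch that also keeps the condition sargable, so an index on the time column could be used:

        SELECT ip, `index`
        FROM users
        WHERE ip = 'some ip'
          AND `time` >= NOW() - INTERVAL 24 HOUR;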


  • Large number of UPDATE queries slowing down page

    - by Bryan Lewis
    I am reading and validating large fixed-width text files (ranging from 10-50K lines) that are submitted via our ASP.net website (coded in VB.Net). I do an initial scan of the file to check for basic issues (line length, etc). Then I import each row into an MS SQL table. Each DB row basically consists of a record_ID (primary, auto-incrementing) and about 50 varchar fields.

    After the insert is done, I run a validation function on the file that checks each field in each row based on a bunch of criteria (trimmed length, isnumeric, range checks, etc). If it finds an error in any field, it inserts a record into the Errors table, which has an error_ID, the record_ID and an error message. In addition, if the field fails in a particular way, I have to do a "reset" on that field. A reset might consist of blanking the entire field, or simply replacing the value with another value (e.g. replacing the string with a new one that has all illegal chars taken out).

    I have a 5,000 line test file. The upload, initial check, and import takes about 5-6 seconds. The detailed error check and insert into the Errors table takes about 5-8 seconds (this file has about 1200 errors in it). However, the "resets" part takes about 40-45 seconds for 750 fields that need to be reset. When I comment out the resets function (returning immediately without actually calling the UPDATE stored proc), the process is very fast. With the resets turned on, the pages take 50 seconds to return.

    My UPDATE stored proc is using some recommended code from http://sommarskog.se/dynamic_sql.html, whereby it uses CASE instead of dynamic SQL:

        UPDATE dbo.Records
        SET dbo.Records.file_ID = CASE @field_name WHEN 'file_ID' THEN @field_value ELSE file_ID END,
        .
        . (all 50 varchar field CASE statements here)
        .
        WHERE dbo.Records.record_ID = @record_ID

    Is there any way I can help my performance here? Can I somehow group all of these UPDATE calls into a single transaction? Should I be reworking the UPDATE query somehow? Or is it just the sheer quantity of 750+ UPDATEs and things are just slow (it's a quad-proc server with 8GB RAM)? Any suggestions appreciated.
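
    Wrapping the existing loop of proc calls in a single transaction is the cheapest first experiment, since it removes a separate commit (and log flush) for each of the 750+ UPDATEs. A further direction, sketched below under the assumption that each reset arrives as a (record_ID, field_name, field_value) triple: stage the resets in a temp table in one round trip (e.g. SqlBulkCopy from the VB.Net side) and apply them set-based, one UPDATE per affected column. The temp table and the column name shown are illustrative.

        CREATE TABLE #FieldResets
        (
            record_ID   int          NOT NULL,
            field_name  sysname      NOT NULL,
            field_value varchar(255) NULL
        );

        -- ...bulk-load the 750 resets into #FieldResets here...

        UPDATE r
        SET    r.file_ID = fr.field_value
        FROM   dbo.Records AS r
        JOIN   #FieldResets AS fr ON fr.record_ID = r.record_ID
        WHERE  fr.field_name = 'file_ID';
        -- repeat (or generate) one such UPDATE per column that actually has resets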


  • MySQL: how to search for fields that hold values separated by commas?

    - by andufo
    Hi. I have 2 tables:

        tags (id_tag, name)
        news (id, title, data, tags)

    The news.tags field is a varchar(255). I'm planning to put data like this in that field: "1,7,34". That means that a particular row in news is linked to tags 1, 7 and 34 from the tags table. Then, how can I search for ALL news records that have the value 34 (among others) in the tags field? Is there a better way to do this?
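
    For the schema as it stands, FIND_IN_SET matches one value inside a comma-separated list. The sketch below also shows the usual normalized alternative (a news_tags junction table; the names are illustrative), which lets the lookup use an index instead of scanning every row.

        -- with the comma-separated column (no spaces after the commas):
        SELECT * FROM news WHERE FIND_IN_SET('34', tags);

        -- normalized alternative:
        CREATE TABLE news_tags (
            id_news INT NOT NULL,
            id_tag  INT NOT NULL,
            PRIMARY KEY (id_news, id_tag)
        );

        SELECT n.*
        FROM news n
        JOIN news_tags nt ON nt.id_news = n.id
        WHERE nt.id_tag = 34;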


  • Optimal way to convert to date

    - by IMHO
    I have a legacy system where all date fields are maintained in YMD format. Example: 20101123 is the date 11/23/2010. I'm looking for the most optimal way to convert from the number to a date field. Here is what I came up with:

        declare @ymd int
        set @ymd = 20101122
        select @ymd, convert(datetime, cast(@ymd as varchar(100)), 112)

    This is a pretty good solution, but I'm wondering if someone has a better way of doing it.


  • MySQL: Which is faster — INSTR or LIKE?

    - by Grekker
    If your goal is to test if a string exists in a MySQL column (of type varchar, text, blob, etc), which of the following is faster / more efficient / better to use, and why? Or, is there some other method that tops either of these?

        INSTR(columnname, 'mystring') > 0
    vs
        columnname LIKE '%mystring%'
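
    Neither form can use an ordinary index (a substring test forces a scan of every row), so in practice they tend to perform about the same. A minimal sketch of the indexed alternative, assuming a CHAR/VARCHAR/TEXT column on a storage engine that supports FULLTEXT (MyISAM, or InnoDB on MySQL 5.6+) and noting that full-text search matches whole words, not arbitrary substrings:

        ALTER TABLE mytable ADD FULLTEXT INDEX ft_columnname (columnname);

        SELECT *
        FROM mytable
        WHERE MATCH(columnname) AGAINST('mystring');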


  • Concatenate SQL script from Powershell

    - by Jeff Meatball Yang
    I have a bunch of (50+) XML files in a directory that I would like to insert into a SQL Server 2008 table. How can I create a SQL script from the command prompt or PowerShell that will let me insert the files into a simple table with the following schema:

        XMLDataFiles
        (
            xmlFileName varchar(255),
            content xml
        )

    All I need is for something to generate a script with a bunch of insert statements. Right now, I'm contemplating writing a silly little .NET console app to write the SQL script. Thanks.
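
    One alternative, if generating a full script is optional: SQL Server 2008 can read each file directly with OPENROWSET(BULK ..., SINGLE_BLOB), which exposes the file contents as a column named BulkColumn. A minimal sketch for a single file (the path and file name are placeholders); since the path must be a literal, a PowerShell loop over the directory could emit one such statement per file to produce the script:

        INSERT INTO XMLDataFiles (xmlFileName, content)
        SELECT 'file1.xml', CONVERT(xml, BulkColumn)
        FROM OPENROWSET(BULK 'C:\xmlfiles\file1.xml', SINGLE_BLOB) AS x;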


  • SQL SERVER - Understanding how MIN(text) works.

    - by tmercer
    I'm doing a little digging and looking for an explanation of how SQL Server evaluates MIN(varchar). I found this remark in BOL: "MIN finds the lowest value in the collating sequence defined in the underlying database." So if I have a table with a Data column containing the values AA, AB, AC, doing a SELECT MIN(Data) returns AA. I just want to understand the why behind this and understand the BOL a little better. Thanks!
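
    In other words, MIN on character data returns whatever ORDER BY would put first under the column's collation, so the answer can change with the collation. A small sketch (the table variable is illustrative) showing the effect on mixed-case values:

        DECLARE @t TABLE (Data varchar(10));
        INSERT INTO @t VALUES ('a');
        INSERT INTO @t VALUES ('B');

        -- case-insensitive collation: 'a' sorts before 'B', so MIN returns 'a'
        SELECT MIN(Data COLLATE Latin1_General_CI_AS) FROM @t;

        -- binary collation: uppercase code points come first, so MIN returns 'B'
        SELECT MIN(Data COLLATE Latin1_General_BIN) FROM @t;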


  • Memory Allocation Error in MySQL

    - by Chinjoo
    I am using the MySQL ODBC driver with .Net 3.5. I have created a stored procedure in MySQL which accepts around 15 parameters with types like datetime, varchar, Int32, Int64, etc. When I run the SP from the query window with the arguments provided, it runs fine. But when I test using the .Net application, it gives an exception with "Memory allocation error"; the MySQL native error code is 4001. Any help will be much appreciated.


  • HTML tags in MySQL text field

    - by paracaudex
    I'm creating a database with what I anticipate will be a long (perhaps several paragraphs for some tuples) attribute. I'm assigning it text instead of varchar. I have two questions:

    1. Should I give a maximum value for the text field? Is this necessary? Is it useful?
    2. Since the contents of this field will be displayed on a website in HTML, do I need to include paragraph tags for paragraph formatting when I enter records into MySQL?


  • How to design this relation in a DB schema

    - by raticulin
    I have a table Car in my db; one of the columns is purchaseDate. I want to be able to tag every car with a number of Policies (limited to 10 policies). Each policy has a time to live (ttl, a duration of time, like '5 years', '10 months' etc), that is, for how long since the car's purchaseDate the policy can be applied. I need to perform the following actions:

    - when inserting a Car, it will be set with a number of Policies (at least one is set)
    - sometimes a Car will be updated to add/remove a Policy
    - searches must be done taking into account date/policies, for example: 'select all cars that are not covered by any policy as of today'

    My current design is (pol0..pol9 are the policies), as shown below; see the sketch after it for the alternative I mention.

        CREATE TABLE Car
        (
            id int NOT NULL IDENTITY(1,1),
            purchaseDate datetime NOT NULL,
            //more stuff...
            pol0 smallint default NULL,
            pol1 smallint default NULL,
            pol2 smallint default NULL,
            pol3 smallint default NULL,
            pol4 smallint default NULL,
            pol5 smallint default NULL,
            pol6 smallint default NULL,
            pol7 smallint default NULL,
            pol8 smallint default NULL,
            pol9 smallint default NULL,
            PRIMARY KEY (id)
        )

        CREATE TABLE Policy
        (
            id smallint NOT NULL,
            name varchar(50) collate Latin1_General_BIN NOT NULL,
            ttl varchar(100) collate Latin1_General_BIN NOT NULL,
            PRIMARY KEY (id)
        )

    The problem I am facing is that the SQL to perform the query above is a nightmare to write, as I don't know in which column each policy can be, so I have to check all columns for every policy etc. So I am wondering whether it is worth changing this. My questions are:

    - The smallint as Policy id was chosen instead of an 'int IDENTITY' in order to save some space, as there are going to be millions of Car records. It just adds complexity when creating a Policy, as we must handle the id etc. Was it worth doing this?
    - I am thinking that maybe there is a much better design? Obviously we could move the policy/car relation to its own table CarPolicy. The benefits would be: no limit of 10 policies per car; adding/removing etc much easier; and when only the default policy is applied (when no others are applied, one called Default policy is applied), we could signal that by not having any entry in CarPolicy - now this is just done by inserting the Default policy id in one of the columns. The cons are that we would need to change the DB, ORM classes etc.

    What would you recommend? Maybe there is another smart way to implement this that we are not aware of, without using the CarPolicy table?
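
    For comparison, a minimal sketch of the CarPolicy route; the ttl_months column is an assumption (the free-text ttl would need to be stored as a number, or split into value/unit, for the date math to happen in SQL):

        CREATE TABLE CarPolicy
        (
            car_id    int      NOT NULL REFERENCES Car(id),
            policy_id smallint NOT NULL REFERENCES Policy(id),
            PRIMARY KEY (car_id, policy_id)
        );

        -- "all cars that are not covered by any policy as of today"
        SELECT c.id
        FROM Car AS c
        WHERE NOT EXISTS
        (
            SELECT 1
            FROM CarPolicy AS cp
            JOIN Policy AS p ON p.id = cp.policy_id
            WHERE cp.car_id = c.id
              AND DATEADD(MONTH, p.ttl_months, c.purchaseDate) >= GETDATE()
        );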


  • MySQL select - improve performance

    - by realshadow
    Hey, I am working on an e-shop which sells products only via loans. I display 10 products per page in any category, and each product has 3 different price tags - 3 different loan types. Everything went pretty well during testing time, query execution time was perfect, but today when I transferred the changes to the production server, the site "collapsed" in about 2 minutes. The query that is used to select loan types sometimes hangs for ~10 seconds, and it happens frequently, thus it can't keep up and it's hella slow. The table that is used to store the data has approximately 2 million records, and each select looks like this:

        SELECT * FROM products_loans
        WHERE KOD IN ("X17/Q30-10", "X17/12", "X17/5-24")
          AND 369.27 BETWEEN CENA_OD AND CENA_DO;

    3 loan types and the price that needs to be in range between CENA_OD and CENA_DO, thus 3 rows are returned.

    But since I need to display 10 products per page, I need to run it through a modified select using OR, since I didn't find any other solution to this. I have asked about it here, but got no answer. As mentioned in the referencing post, this has to be done separately since there is no column that could be used in a join (except of course price and code, but that ended very, very badly). Here is the SHOW CREATE TABLE; KOD and CENA_OD/CENA_DO are indexed via INDEX.

        CREATE TABLE `products_loans` (
          `KOEF_ID` bigint(20) NOT NULL,
          `KOD` varchar(30) NOT NULL,
          `AKONTACIA` int(11) NOT NULL,
          `POCET_SPLATOK` int(11) NOT NULL,
          `koeficient` decimal(10,2) NOT NULL default '0.00',
          `CENA_OD` decimal(10,2) default NULL,
          `CENA_DO` decimal(10,2) default NULL,
          `PREDAJNA_CENA` decimal(10,2) default NULL,
          `AKONTACIA_SUMA` decimal(10,2) default NULL,
          `TYP_VYHODY` varchar(4) default NULL,
          `stage` smallint(6) NOT NULL default '1',
          PRIMARY KEY (`KOEF_ID`),
          KEY `CENA_OD` (`CENA_OD`),
          KEY `CENA_DO` (`CENA_DO`),
          KEY `KOD` (`KOD`),
          KEY `stage` (`stage`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8

    Also, selecting all loan types and later filtering them through PHP doesn't work well, since each type has over 50k records and the select takes too much time as well... Any ideas about improving the speed are appreciated.

    Edit: Here is the EXPLAIN:

        +----+-------------+----------------+-------+---------------------+------+---------+------+--------+-------------+
        | id | select_type | table          | type  | possible_keys       | key  | key_len | ref  | rows   | Extra       |
        +----+-------------+----------------+-------+---------------------+------+---------+------+--------+-------------+
        |  1 | SIMPLE      | products_loans | range | CENA_OD,CENA_DO,KOD | KOD  | 92      | NULL | 190158 | Using where |
        +----+-------------+----------------+-------+---------------------+------+---------+------+--------+-------------+

    I have tried the combined index and it improved the performance on the test server from 0.44 sec to 0.06 sec. I can't access the production server from home though, so I will have to try it tomorrow.
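
    A minimal sketch of the combined index mentioned at the end (the index name is arbitrary): with KOD first and the price bounds after it, the optimizer can narrow each KOD value by CENA_OD inside the index instead of examining ~190k rows, which matches the 0.44 to 0.06 sec improvement observed on the test server.

        ALTER TABLE products_loans
            ADD INDEX idx_kod_cena (KOD, CENA_OD, CENA_DO);

        SELECT * FROM products_loans
        WHERE KOD IN ('X17/Q30-10', 'X17/12', 'X17/5-24')
          AND 369.27 BETWEEN CENA_OD AND CENA_DO;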


  • How to enter a manual timestamp in getdate()

    - by Arunachalam
    How do I enter a manual timestamp in getdate()?

        select convert(varchar(10), getdate(), 120)

    returns 2010-06-07. Now I want to enter my own timestamp in this, like 2010-06-07 10.00.00.000. I'm using it in:

        select * from sample table where time_stamp = '2010-06-07 10.00.00.000'

    Since I'm trying to automate this query I need the current date, but I need a different timestamp. Can it be done?
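
    A minimal sketch of one way to do it (sample_table stands in for the real table name): build the value from today's date plus the fixed time of day and compare against that. Note that a datetime literal wants colons in the time part, not dots.

        DECLARE @stamp varchar(23);
        SET @stamp = CONVERT(varchar(10), GETDATE(), 120) + ' 10:00:00.000';

        SELECT *
        FROM sample_table
        WHERE time_stamp = @stamp;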


  • Generic MySQL stored procedure

    - by psu
    Hi, I have the following stored procedure:

        CREATE PROCEDURE `get`(IN tb VARCHAR(50), IN id INTEGER)
        BEGIN
            SELECT * FROM tb WHERE Indx = id;
        END//

    When I call get(user, 1) I get the following:

        ERROR 1054 (42S22): Unknown column 'user' in 'field list'
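
    Two separate things seem to be going on here: the call passes user as a bare identifier rather than a string (hence the "unknown column" error), and a table name cannot be a variable in a plain SELECT, so the procedure has to build the statement dynamically. A minimal sketch using PREPARE/EXECUTE:

        DELIMITER //
        CREATE PROCEDURE `get`(IN tb VARCHAR(50), IN id INTEGER)
        BEGIN
            -- table names cannot be bound as parameters, so build the statement text
            SET @sql = CONCAT('SELECT * FROM ', tb, ' WHERE Indx = ?');
            PREPARE stmt FROM @sql;
            SET @id = id;
            EXECUTE stmt USING @id;
            DEALLOCATE PREPARE stmt;
        END//
        DELIMITER ;

        -- called with the table name quoted as a string:
        CALL `get`('user', 1);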

