Search Results

Search found 130 results on 6 pages for 'collate'. This is page 4 of 6.

  • CREATE mysql database with default InnoDB tables?

    - by memilanuk
    Hello, I've been working on writing a SQL statement to create a MySQL database with several default options, including default character set and default collate. Is it possible to add syntax to make the default engine type for tables in this database to be innodb? I've been looking through the MySQL manual for v.5.1 and I've found the statement 'ENGINE=innodb' which would be appended to a CREATE TABLE statement... but I haven't found anything related to a CREATE DATABASE statement. Is there a normal way to do this as part of the database creation, or does it need to be specified on a table-by-table basis? Thanks, Monte
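
    A hedged note for reference: CREATE DATABASE takes character-set and collation options only, so MySQL 5.1 has no per-database default engine clause; the usual workaround is the storage-engine server/session variable, as in this sketch (the database name is illustrative):

      -- Sketch, assuming MySQL 5.1, where the variable is spelled storage_engine
      -- (later versions use default_storage_engine; it can also go in my.cnf):
      CREATE DATABASE mydb
        DEFAULT CHARACTER SET utf8
        DEFAULT COLLATE utf8_general_ci;

      SET storage_engine = InnoDB;  -- session-wide default engine

      USE mydb;
      CREATE TABLE t1 (id INT PRIMARY KEY);  -- becomes InnoDB with no ENGINE= clause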

  • Run Sql file in MYSQL PHPMyadmin

    - by Dev
    Hi All, I have written a SQL file, but executing it throws this error:

      mysql @"C:\Documents and Settings\Hemant\Desktop\create_tables.sql";
      ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
      corresponds to your MySQL server version for the right syntax to use near
      '@"C:\Documents and Settings\Hemant\Desktop\create_tables.sql"' at line 1

    The file contains:

      CREATE DATABASE IF NOT EXISTS test;
      DEFAULT CHARACTER SET latin1 COLLATE latin1_swedish_ci;
      USE test;

    Please let me know if I am missing something.
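
    A sketch of the likely fix, for reference: the mysql @"..." form is not valid at the mysql> prompt, so a script is normally loaded with the client's SOURCE command, and the stray semicolon after `test` cuts the CREATE DATABASE options off into a second, invalid statement:

      SOURCE C:/Documents and Settings/Hemant/Desktop/create_tables.sql;

      -- and inside the file, the options belong to CREATE DATABASE itself:
      CREATE DATABASE IF NOT EXISTS test
        DEFAULT CHARACTER SET latin1 COLLATE latin1_swedish_ci;
      USE test;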

  • Programmatically access document properties

    - by ngm
    Is there a way in which I can programmatically access the document properties of a Word 2007 document? I am open to using any language for this, but ideally it might be via a PowerShell script. My overall aim is to traverse the documents somewhere on a filesystem, parse some document properties from these documents, and then collate all of these properties back together into a new Word document. (I essentially want to automatically create a document which is a list of all documents beneath a certain folder of the filesystem; and this list would contain such things as the Title, Abstract and Author document properties; the CreateDate field; etc. for each document)

  • Pervasive SQL german Umlauts Problem

    - by cordellcp3
    Hi there, I'm using the Pervasive SQL ADO.NET 3.5 DataProvider for retrieving data out of the PSQL DB, and I've noticed that the German umlauts (äöüÄÖÜ etc.) are not represented correctly in the PSQLDataReader, although in the Pervasive Control Center (similar to the SQL Management Studio) the umlauts are all correct. Is there anything similar to the TSQL "SET LANGUAGE" command? I haven't found anything like that for Pervasive SQL, and Googling the issue wasn't successful either. I did find some tips mentioning a file called upper.alt or collate.cfg, but I don't know how to use these files and I couldn't find them in my installation. (I'm totally new to Pervasive...) I hope that someone on here can help me with this. Thanks in advance

  • A table that has relation to itself issue

    - by Mostafa
    I've defined a table with this schema:

      CREATE TABLE [dbo].[Codings](
        [Id] [int] IDENTITY(1,1) NOT NULL,
        [ParentId] [int] NULL,
        [CodeId] [int] NOT NULL,
        [Title] [nvarchar](50) COLLATE Arabic_CI_AI NOT NULL,
        CONSTRAINT [PK_Codings] PRIMARY KEY CLUSTERED ([Id] ASC)
          WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
      ) ON [PRIMARY]

    and filled it with data like this:

      Id          ParentId    CodeId      Title
      ----------- ----------- ----------- ----------
      1           NULL        0           Gender
      2           1           1           Male
      3           1           2           Female
      4           NULL        0           Educational Level
      5           4           1           BS
      6           4           2           MS
      7           4           3           PHD

    Now I'm looking for a solution so that when I delete a record that is a parent (like Id = 1 or 4), all its children are deleted automatically (all records whose ParentId is 1 or 4). I supposed I could do it via a relation between Id and ParentId (with cascade set as the delete rule), but when I do that in Management Studio, the Delete Rule and Update Rule properties are disabled. My question is: what can I do to accomplish this? Thank you
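
    A note on why the option is greyed out: SQL Server refuses ON DELETE CASCADE on a self-referencing foreign key (it reports possible "cycles or multiple cascade paths"), so the subtree has to be deleted explicitly. A sketch using a recursive CTE (@RootId is a hypothetical parameter):

      DECLARE @RootId int;
      SET @RootId = 1;

      ;WITH subtree AS (
        SELECT Id FROM dbo.Codings WHERE Id = @RootId
        UNION ALL
        SELECT c.Id FROM dbo.Codings c
        INNER JOIN subtree s ON c.ParentId = s.Id
      )
      DELETE FROM dbo.Codings WHERE Id IN (SELECT Id FROM subtree);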

  • CakePHP: How can I make a model test on a table with another primary key?

    - by Marcelo
    I have this table:

      CREATE TABLE myexamples.problems (
        id INT,
        name VARCHAR(45) NULL,
        pk_id INT AUTO_INCREMENT PRIMARY KEY
      );

    But when I try to test a model in CakePHP, it fails because the generated test table has two auto-increment attributes. The following query

      CREATE TABLE `test_suite_problems` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `name` varchar(45) DEFAULT NULL,
        `pk_id` int(11) NOT NULL AUTO_INCREMENT,
        PRIMARY KEY (`pk_id`)
      ) DEFAULT CHARSET=latin1, COLLATE=latin1_swedish_ci, ENGINE=InnoDB;

    raises this error: "1075: Incorrect table definition; there can be only one auto column and it must be defined as a key". In the model class I have:

      <?php
      class Problem extends AppModel {
          var $name = 'Problem';
          var $displayField = 'name';
          var $primaryKey = 'problems';
      }
      ?>

    But I don't know how to stop the id field from getting an auto-increment attribute, and I can't change the table structure.
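
    For comparison, a table definition MySQL will accept keeps AUTO_INCREMENT on the key column only; whether CakePHP emits this depends on the model metadata (presumably $primaryKey should name the real key column, 'pk_id', rather than 'problems'):

      -- Sketch: only one auto column is allowed, and it must be a key.
      CREATE TABLE `test_suite_problems` (
        `id` int(11) DEFAULT NULL,
        `name` varchar(45) DEFAULT NULL,
        `pk_id` int(11) NOT NULL AUTO_INCREMENT,
        PRIMARY KEY (`pk_id`)
      ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci;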

  • Changing MSSQL Database sorting

    - by plaisthos
    I have a request to change the collation of a MS SQL database:

      ALTER DATABASE solarwind95 COLLATE SQL_Latin1_General_CP1_CI_AS

    but I get this strange error:

      Meldung 5075, Ebene 16, Status 1, Zeile 1
      Das 'Spalte'-Objekt 'CustomPollerAssignment.PollerID' ist von 'Datenbanksortierung' abhängig.
      Die Datenbanksortierung kann nicht geändert werden, wenn ein schemagebundenes Objekt von ihr
      abhängig ist. Entfernen Sie die Abhängigkeiten der Datenbanksortierung, und wiederholen Sie
      den Vorgang.

    Sorry for the German error message; I do not know how to switch the language to English, but here is a translation:

      Message 5075, Level 16, State 1, Line 1
      The 'column' object 'CustomPollerAssignment.PollerID' depends on 'database collation'.
      The database collation cannot be changed if a schema-bound object depends on it.
      Remove the dependencies on the database collation and retry.

    I got a ton more errors like that.
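
    For reference, error 5075 means schema-bound objects (computed columns, check constraints, indexed views) reference the current database collation, and each has to be dropped before the ALTER and recreated afterwards. A sketch with hypothetical object names:

      -- 1. drop the offending schema-bound object(s)
      ALTER TABLE CustomPollerAssignment DROP CONSTRAINT ck_PollerID;  -- hypothetical name

      -- 2. change the collation
      ALTER DATABASE solarwind95 COLLATE SQL_Latin1_General_CP1_CI_AS;

      -- 3. recreate what was dropped
      ALTER TABLE CustomPollerAssignment
        ADD CONSTRAINT ck_PollerID CHECK (PollerID > 0);  -- hypothetical definition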

  • Embarrassingly parallel workflow creates too many output files

    - by Hooked
    On a Linux cluster I run many (N > 10^6) independent computations. Each computation takes only a few minutes and the output is a handful of lines. When N was small I was able to store each result in a separate file to be parsed later. With large N, however, I find that I am wasting storage space (on file creation overhead) and simple commands like ls require extra care due to internal limits of bash: -bash: /bin/ls: Argument list too long. Each computation is required to run through a qsub scheduling algorithm, so I am unable to create a master program which simply aggregates the output data to a single file. The simple solution of appending to a single file fails when two programs finish at the same time and interleave their output. I have no admin access to the cluster, so installing a system-wide database is not an option. How can I collate the output data from an embarrassingly parallel computation before it gets unmanageable?

  • Variable collation with MySQL stored function?

    - by Chad Johnson
    I want to do something like this in a stored procedure:

      IF case_sensitive = FALSE THEN
        SET search_collation = "utf8_unicode_ci";
      ELSE
        SET search_collation = "utf8_bin";
      END IF;

      INSERT INTO TABLE1 (field1, field2)
      SELECT * FROM TABLE2
      WHERE some_field LIKE '%rarf%' COLLATE search_collation;

    However, when I do this, I get

      ERROR 1273 (HY000): Unknown collation: 'search_collation'

    Also, if I do what's suggested at http://stackoverflow.com/questions/1680850/mysql-stored-procedures-use-a-variable-as-the-database-name-in-a-cursor-declara/2070021#2070021 I get

      Dynamic SQL is not allowed in stored function or trigger

    How can I use a dynamic collation?
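
    A sketch of the usual workaround: COLLATE requires a literal collation name, so the statement has to be built dynamically, and MySQL does allow prepared statements inside stored procedures (the quoted restriction applies only to stored functions and triggers):

      SET @sql = CONCAT(
        'INSERT INTO TABLE1 (field1, field2) ',
        'SELECT field1, field2 FROM TABLE2 ',
        'WHERE some_field LIKE ''%rarf%'' COLLATE ',
        IF(case_sensitive, 'utf8_bin', 'utf8_unicode_ci'));

      PREPARE stmt FROM @sql;
      EXECUTE stmt;
      DEALLOCATE PREPARE stmt;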

  • SQL select all items of an owner from an item-to-owner table

    - by kdobrev
    I have a table bike_to_owner and I would like to select the items currently owned by a specific user. The table structure is:

      CREATE TABLE IF NOT EXISTS `bike_to_owner` (
        `bike_id` int(10) unsigned NOT NULL,
        `user_id` int(10) unsigned NOT NULL,
        `last_change_date` date NOT NULL,
        PRIMARY KEY (`bike_id`,`user_id`,`last_change_date`)
      ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    On the user's profile page I would like to display all of his/her current possessions. I wrote this statement:

      SELECT `bike_id`, `user_id`, MAX(last_change_date)
      FROM `bike_to_owner`
      WHERE `user_id` = 3
      GROUP BY `last_change_date`

    but I'm not quite sure it works correctly in all cases. Can you please verify whether this is correct and, if not, suggest something better? Using PHP/MySQL. Thanks in advance!
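
    For reference, the statement above groups by the wrong column, and a bike's latest row may belong to someone else entirely. A sketch that treats "current owner" as whoever holds each bike's most recent row:

      SELECT b.bike_id
      FROM bike_to_owner AS b
      INNER JOIN (
        SELECT bike_id, MAX(last_change_date) AS latest
        FROM bike_to_owner
        GROUP BY bike_id
      ) AS t ON t.bike_id = b.bike_id AND t.latest = b.last_change_date
      WHERE b.user_id = 3;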

  • How to handle a large table in MySQL?

    - by Frantz Miccoli
    I have a database used to store items and properties of these items. The number of properties is extensible, so there is a join table storing each property value associated with an item:

      CREATE TABLE `item_property` (
        `property_id` int(11) NOT NULL,
        `item_id` int(11) NOT NULL,
        `value` double NOT NULL,
        PRIMARY KEY (`property_id`,`item_id`),
        KEY `item_id` (`item_id`)
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    This database has two goals: storing (first priority, and it has to be very quick; I would like to perform many inserts (hundreds) in a few seconds), and retrieving data (selects using item_id and property_id; second priority, so it can be slower, but not by too much, because that would ruin my usage of the DB). Currently this table holds 1.6 billion entries and a simple count can take up to 2 minutes... Inserting isn't fast enough to be usable. I'm using Zend_Db to access my data and would really appreciate it if you don't suggest developing anything on the PHP side. Thanks for your advice!
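
    One standard lever for the insert side, sketched below: batch many rows per statement inside one transaction, so InnoDB pays the commit and log-flush cost once per batch rather than once per row (server tuning such as innodb_flush_log_at_trx_commit and buffer pool size is the usual complement):

      START TRANSACTION;
      INSERT INTO item_property (property_id, item_id, value) VALUES
        (1, 1001, 0.5),
        (2, 1001, 3.25),
        (1, 1002, 0.75);  -- ...hundreds of rows per statement
      COMMIT;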

  • Best way to correct garbled data caused by false encoding

    - by ercan
    Hi all, I have a set of data that contains garbled text fields because of encoding errors during many import/exports from one database to another. Most of the errors were caused by converting UTF-8 to ISO-8859-1. Strangely enough, the errors are not consistent: the word 'München' appears as 'MÃ¼nchen' in some places and as 'MÃœnchen' in others. Is there a trick in SQL Server to correct this kind of crap? The first thing I can think of is to exploit the COLLATE clause, so that 'Ã¼' is interpreted as 'ü', but I don't know exactly how. If it isn't possible to do it at the DB level, do you know of any tool that helps with a bulk correction? (Not a manual find/replace tool, but a tool that somehow guesses the garbled text and corrects it.)

  • Unique constraint with nullable column

    - by Álvaro G. Vicario
    I have a table that holds nested categories. I want to avoid duplicate names among same-level items (i.e., categories with the same parent). I've come up with this:

      CREATE TABLE `category` (
        `category_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
        `category_name` varchar(100) NOT NULL,
        `parent_id` int(10) unsigned DEFAULT NULL,
        PRIMARY KEY (`category_id`),
        UNIQUE KEY `category_name_UNIQUE` (`category_name`,`parent_id`),
        KEY `fk_category_category1` (`parent_id`,`category_id`),
        CONSTRAINT `fk_category_category1` FOREIGN KEY (`parent_id`)
          REFERENCES `category` (`category_id`) ON DELETE SET NULL ON UPDATE CASCADE
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_spanish_ci

    Unluckily, category_name_UNIQUE does not enforce my rule for root-level categories (those where parent_id is NULL). Is there a reasonable workaround?
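
    For reference, MySQL unique indexes allow any number of NULLs, which is why root-level duplicates slip through. One sketch is to enforce the rule in a trigger with the NULL-safe comparison operator (assumes MySQL 5.5+ for SIGNAL; an UPDATE twin would be needed too):

      DELIMITER //
      CREATE TRIGGER category_name_unique_ins BEFORE INSERT ON category
      FOR EACH ROW
      BEGIN
        IF EXISTS (SELECT 1 FROM category
                   WHERE category_name = NEW.category_name
                     AND parent_id <=> NEW.parent_id) THEN
          SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'Duplicate category name under the same parent';
        END IF;
      END//
      DELIMITER ;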

  • Why is a MySQL multiple-column index overpopulated?

    - by actual
    Consider the following MySQL table:

      CREATE TABLE `log` (
        `what` enum('add', 'edit', 'remove') CHARACTER SET ascii COLLATE ascii_bin NOT NULL,
        `with` int(10) unsigned NOT NULL,
        KEY `with_what` (`with`,`what`)
      ) ENGINE=InnoDB;

      INSERT INTO `log` (`what`, `with`) VALUES
        ('add', 1), ('edit', 1), ('add', 2), ('remove', 2);

    As I understand it, the with_what index should have 2 unique entries at its first (with) level and 3 unique entries in the what "subindex". But MySQL reports 4 unique entries for each level. In other words, the number of unique elements at each level is always equal to the number of rows in the log table. Is that a bug, a feature, or my misunderstanding?
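
    A hedged explanation: for InnoDB, the cardinality shown by SHOW INDEX is an estimate from a few random index dives, and for a composite index each level counts distinct key prefixes, so the second level counts distinct (with, what) pairs, which really is 4 here; on a table this small the estimate for the first level can come out as the row count too. A quick check:

      ANALYZE TABLE `log`;   -- refresh the estimates
      SHOW INDEX FROM `log`; -- estimates only; often just the row count on tiny tables

      SELECT COUNT(DISTINCT `with`)         AS level1,  -- exact: 2
             COUNT(DISTINCT `with`, `what`) AS level2   -- exact: 4
      FROM `log`;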

  • Find node level in a tree

    - by Álvaro G. Vicario
    I have a tree (nested categories) stored as follows:

      CREATE TABLE `category` (
        `category_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
        `category_name` varchar(100) NOT NULL,
        `parent_id` int(10) unsigned DEFAULT NULL,
        PRIMARY KEY (`category_id`),
        UNIQUE KEY `category_name_UNIQUE` (`category_name`,`parent_id`),
        KEY `fk_category_category1` (`parent_id`,`category_id`),
        CONSTRAINT `fk_category_category1` FOREIGN KEY (`parent_id`)
          REFERENCES `category` (`category_id`) ON DELETE SET NULL ON UPDATE CASCADE
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_spanish_ci

    I need to feed my client-side language (PHP) with node information (child + parent) so it can build the tree in memory. I can tweak my PHP code, but I think the operation would be way simpler if I could just retrieve the rows in such an order that all parents come before their children. I could do that if I knew the level of each node:

      SELECT category_id, category_name, parent_id
      FROM category
      ORDER BY level -- No `level` column so far :(

    Can you think of a way (view, stored routine or whatever...) to calculate the node level? I guess it's okay if it's not real-time and I need to recalculate it on node modification.
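
    One sketch, given that MySQL 5.x has no recursive CTEs: a small stored function that counts the hops from a node up to the root, usable directly in ORDER BY (one lookup per ancestor, fine for shallow trees):

      DELIMITER //
      CREATE FUNCTION category_level(p_id INT UNSIGNED) RETURNS INT
      READS SQL DATA
      BEGIN
        DECLARE lvl INT DEFAULT 0;
        WHILE p_id IS NOT NULL DO
          SELECT parent_id INTO p_id FROM category WHERE category_id = p_id;
          SET lvl = lvl + 1;
        END WHILE;
        RETURN lvl;
      END//
      DELIMITER ;

      SELECT category_id, category_name, parent_id
      FROM category
      ORDER BY category_level(category_id);  -- parents sort before their children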

  • postgres min function performance

    - by wutzebaer
    Hi, I need the lowest value of runnerId. This query:

      SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794';

    takes 80ms (1968 result rows). This one:

      SELECT min("runnerId") FROM betlog WHERE "marketId" = '107416794';

    takes 1600ms. Is there a faster way to find the minimum, or should I compute the min in my Java program? The plan for the min() query:

      "Result  (cost=100.88..100.89 rows=1 width=0)"
      "  InitPlan 1 (returns $0)"
      "    ->  Limit  (cost=0.00..100.88 rows=1 width=9)"
      "          ->  Index Scan using runneridindex on betlog  (cost=0.00..410066.33 rows=4065 width=9)"
      "                Index Cond: ("runnerId" IS NOT NULL)"
      "                Filter: ("marketId" = 107416794::bigint)"

    The existing index:

      CREATE INDEX marketidindex ON betlog USING btree ("marketId" COLLATE pg_catalog."default");

    Another idea:

      SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId" LIMIT 1;  -- >1600ms
      SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId";          -- >>100ms

    How can a LIMIT slow the query down?
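
    The plan explains the slowdown: min() walks the runnerId index from its lowest value and filters every row for the market, instead of using marketidindex. A composite index, sketched below, lets Postgres descend straight to the smallest runnerId within one market:

      CREATE INDEX betlog_market_runner_idx ON betlog ("marketId", "runnerId");

      -- min(), and ORDER BY ... LIMIT 1, should now be a short index descent:
      SELECT min("runnerId") FROM betlog WHERE "marketId" = '107416794';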

  • Getting a query to index seek (rather than scan)

    - by PaulB
    Running the following query (SQL Server 2000), the execution plan shows that it used an index seek, and Profiler shows it's doing 71 reads with a duration of 0:

      select top 1 id from table where name = '0010000546163' order by id desc

    Contrast that with the following, which uses an index scan with 8500 reads and a duration of about a second:

      declare @p varchar(20)
      select @p = '0010000546163'
      select top 1 id from table where name = @p order by id desc

    Why is the execution plan different? Is there a way to change the second method to seek? Thanks.

    EDIT: The table looks like

      CREATE TABLE [table] (
        [Id] [int] IDENTITY (1, 1) NOT NULL,
        [Name] [varchar] (13) COLLATE Latin1_General_CI_AS NOT NULL)

    Id is the primary clustered key. There is a non-unique index on Name and a unique composite index on Id/Name. There are other columns - left them out for brevity.
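
    For reference: with a literal the optimizer can use the column statistics, but with a local variable it compiles a plan for an unknown value, and the varchar(20)/varchar(13) mismatch doesn't help. Two sketches that usually restore the seek on SQL Server 2000 (the index name is hypothetical):

      -- 1. parameterize with the column's exact type via sp_executesql
      EXEC sp_executesql
        N'select top 1 id from [table] where name = @p order by id desc',
        N'@p varchar(13)',
        @p = '0010000546163'

      -- 2. keep the variable but force the Name index
      declare @p varchar(13)
      select @p = '0010000546163'
      select top 1 id from [table] with (index(IX_table_Name))
      where name = @p order by id desc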

  • SQL insert default value

    - by Stan
    Say I have a table like:

      CREATE TABLE [Message] (
        [MessageIdx] [int] IDENTITY (1, 1) NOT NULL,
        [Message] [varchar] (1024) COLLATE Latin1_General_CI_AS NOT NULL,
        [ValidUntil] [datetime] NULL,
        CONSTRAINT [PK_Message] PRIMARY KEY CLUSTERED ([MessageIdx])
          WITH FILLFACTOR = 90 ON [PRIMARY]
      ) ON [PRIMARY]
      GO

    I am trying to insert a row without specifying the column names explicitly, but the statement below causes an error. How can I do this? Thanks.

      set identity_insert caconfig..fxmessage on;
      insert into message values (DEFAULT,'blah',DEFAULT);
      set identity_insert caconfig..fxmessage off;
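
    For reference, SQL Server rejects DEFAULT as a stand-in for an identity value ("DEFAULT or NULL are not allowed as explicit identity values"), and IDENTITY_INSERT additionally requires an explicit column list. Two working sketches:

      -- the usual form: skip the identity column and let defaults apply
      INSERT INTO [Message] ([Message], [ValidUntil]) VALUES ('blah', DEFAULT);

      -- or supply the identity value yourself, with a column list
      SET IDENTITY_INSERT [Message] ON;
      INSERT INTO [Message] ([MessageIdx], [Message]) VALUES (42, 'blah');
      SET IDENTITY_INSERT [Message] OFF;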

  • Introducing AutoVue Document Print Service

    - by celine.beck
    We recently announced the availability of our new AutoVue Document Print Service products. For more information, please read the article entitled Print Any Document Type with AutoVue Document Print Services that was posted on our blog.

    The AutoVue Document Print Service products help address a trivial, yet very common challenge: printing and batch printing documents. The AutoVue Document Print Service is a Web-Services based interface, which allows developers to complement their print server solutions by leveraging AutoVue's printing capabilities within broader enterprise applications like Asset Lifecycle Management, Product Lifecycle Management, Enterprise Content Management solutions, etc. This means that you can leverage the AutoVue Document Print Service products as part of your printing solution to automate the printing of virtually any document type required in any business process. Clients that consume AutoVue's Document Print Service can be written in any language (for example Java or .NET) as long as they understand Web Services Description Language (WSDL) and communicate using Simple Object Access Protocol (SOAP).

    The print solution consists of three main components, as described in the diagram below:

      • A print server (not included in the AutoVue Document Print Service offering) that will interact with your application to identify the files that need to be printed, the printer to send each file to, as well as the print options needed for each file (paper size, page orientation, etc.), and collate the print job requests. The print server will also take care of calling the AutoVue Document Print Service to perform the actual printing.
      • The AutoVue Document Print Services, which send files to a printer for printing. The AutoVue Document Print Service products leverage AutoVue's format- and platform-agnostic technology to let you print/batch-print virtually any type of file, without requiring the authoring application installed on your machine.
      • Printers.

    As shown above, you can trigger printing from your application either programmatically through automated business processes or manually through human interaction. If documents that need to be printed from your application are stored inside a content repository/Document Management System (DMS) such as Oracle Universal Content Management System (UCM), then the print server will need to identify the list of documents and pass the ID of each document to the AutoVue DPS to print. In this case, AutoVue DPS leverages the AutoVue VueLink integration (note: AutoVue VueLink integrations are pre-packaged AutoVue integrations with most common enterprise systems; check our Website for more information on the subject) to fetch documents out of the document management system for printing. In lieu of the AutoVue VueLink integration, you can also leverage the AutoVue Integration Software Development Kit (iSDK) to build your own connector. If the documents you need to print from your application are not stored in a content management system, the print server will need to ensure that files are made available to the AutoVue Document Print Service. The print server could, for example, fetch the files out of your application, or an extension to the application could be developed to fetch the files and make them available to the AutoVue DPS. More information on methods to pass file information to the AutoVue Document Print Service products can be found in the AutoVue Document Print Service Overview documentation available on the Oracle Technology Network.

    Related article: Print Any Document Type with AutoVue Document Print Services

  • Sniffing out SQL Code Smells: Inconsistent use of Symbolic names and Datatypes

    - by Phil Factor
    It is an awkward feeling. You've just delivered a database application that seems to be working fine in production, and you just run a few checks on it. You discover that there is a potential bug that, out of sheer good chance, hasn't kicked in to produce an error; but it lurks, like a smoking bomb. Worse, maybe you find that the bug has started its evil work of corrupting the data, but in ways that nobody has, so far, detected. You investigate, and find the damage. You are somehow going to have to repair it. Yes, it still very occasionally happens to me. It is not a nice feeling, and I do anything I can to prevent it happening. That's why I'm interested in SQL code smells. SQL Code Smells aren't necessarily bad practices, but just show you where to focus your attention when checking an application. Sometimes with databases the bugs can be subtle. SQL is rather like HTML: the language does its best to try to carry out your wishes, rather than to be picky about your bugs. Most of the time, this is a great benefit, but not always. One particular place where this can be detrimental is where you have implicit conversion between different data types. Most of the time it is completely harmless, but we're concerned about the occasional time it isn't. Let's give an example: string truncation. Let's give another, even more frightening one: rounding errors on assignment to a number of different precision. Each requires a blog post to explain in detail and I'm not now going to try. Just remember that it is not always a good idea to assign data to variables, parameters or even columns when they aren't the same datatype, especially if you are relying on implicit conversion to work its magic. For details of the problem and the consequences, see here: SR0014: Data loss might occur when casting from {Type1} to {Type2}. For any experienced Database Developer, this is a more frightening read than a Vampire Story.

    This is why one of the SQL Code Smells that makes me edgy, in my own or other peoples' code, is to see parameters, variables and columns that have the same names and different datatypes. Whereas quite a lot of this is perfectly normal and natural, you need to check in case one of two things has gone wrong: either sloppy naming, or mixed datatypes. Sure, it is hard to remember whether you decided that the length of a log entry was 80 or 100 characters long, or the precision of a number. That is why a little check like the one I'm going to show you is excellent for tidying up your code before you check it back into Source Control!

    1/ Checking Parameters only

    If you were just going to check parameters, you might just do this. It simply groups all the parameters, either input or output, of all the routines (e.g. stored procedures or functions) by their name and checks to see, in the HAVING clause, whether their data types are all the same. If not, it lists all the examples and their origin (the routine). Even this little check can occasionally be scarily revealing.

      ;WITH userParameter AS (
        SELECT
          c.NAME AS ParameterName,
          OBJECT_SCHEMA_NAME(c.object_ID) + '.' + OBJECT_NAME(c.object_ID) AS ObjectName,
          t.name + ' '
            + CASE --we may have to put in the length
                WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')
                  THEN '('
                    + CASE WHEN c.max_length = -1 THEN 'MAX'
                           ELSE CONVERT(VARCHAR(4),
                                  CASE WHEN t.name IN ('nchar', 'nvarchar')
                                    THEN c.max_length / 2 ELSE c.max_length
                                  END)
                      END + ')'
                WHEN t.name IN ('decimal', 'numeric')
                  THEN '(' + CONVERT(VARCHAR(4), c.precision)
                    + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'
                ELSE ''
              END --we've done with putting in the length
            + CASE WHEN XML_collection_ID <> 0
                THEN --deal with object schema names
                  '(' + CASE WHEN is_XML_Document = 1
                          THEN 'DOCUMENT '
                          ELSE 'CONTENT '
                        END
                    + COALESCE(
                        (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
                         FROM sys.xml_schema_collections sc
                         INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID
                         WHERE sc.xml_collection_ID = c.XML_collection_ID), 'NULL') + ')'
                ELSE ''
              END AS [DataType]
        FROM sys.parameters c
        INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
        WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'
          AND parameter_id > 0
      )
      SELECT CONVERT(CHAR(80), objectName + '.' + ParameterName), DataType
      FROM UserParameter
      WHERE ParameterName IN
        (SELECT ParameterName FROM UserParameter
         GROUP BY ParameterName
         HAVING MIN(Datatype) <> MAX(DataType))
      ORDER BY ParameterName

    So, in a very small example here, we have a @ClosingDelimiter variable that is only CHAR(1) when, by the looks of it, it should be up to ten characters long; or, even worse, a function that should be a char(1) and seems to let in a string of ten characters. Worth investigating. Then we have a @Comment variable that can't decide whether it is a VARCHAR(2000) or a VARCHAR(MAX).

    2/ Columns and Parameters

    Actually, once we've cleared up the mess we've made of our parameter-naming in the database we're inspecting, we're going to be more interested in listing both columns and parameters. We can do this by modifying the routine to list columns as well as parameters. Because of the slight complexity of creating the string version of the datatypes, we will create a fake table of both columns and parameters so that they can both be processed the same way. After all, we want the datatypes to match. Unfortunately, parameters do not expose all the attributes we are interested in, such as whether they are nullable (oh yes, subtle bugs happen if this isn't consistent for a datatype). We'll have to leave them out for this check. Voila! A slight modification of the first routine:

      ;WITH userObject AS (
        SELECT
          Name AS DataName, --the actual name of the parameter or column ('@' removed)
          --and the qualified object name of the routine
          OBJECT_SCHEMA_NAME(ObjectID) + '.' + OBJECT_NAME(ObjectID) AS ObjectName,
          --now the harder bit: the definition of the datatype.
          TypeName + ' '
            + CASE --we may have to put in the length, e.g. CHAR (10)
                WHEN TypeName IN ('char', 'varchar', 'nchar', 'nvarchar')
                  THEN '('
                    + CASE WHEN MaxLength = -1 THEN 'MAX'
                           ELSE CONVERT(VARCHAR(4),
                                  CASE WHEN TypeName IN ('nchar', 'nvarchar')
                                    THEN MaxLength / 2 ELSE MaxLength
                                  END)
                      END + ')'
                WHEN TypeName IN ('decimal', 'numeric') --a BCD number!
                  THEN '(' + CONVERT(VARCHAR(4), Precision)
                    + ',' + CONVERT(VARCHAR(4), Scale) + ')'
                ELSE ''
              END --we've done with putting in the length
            + CASE WHEN XML_collection_ID <> 0 --tush tush. XML
                THEN --deal with object schema names
                  '(' + CASE WHEN is_XML_Document = 1
                          THEN 'DOCUMENT '
                          ELSE 'CONTENT '
                        END
                    + COALESCE(
                        (SELECT TOP 1 QUOTENAME(ss.name) + '.' + QUOTENAME(sc.Name)
                         FROM sys.xml_schema_collections sc
                         INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID
                         WHERE sc.xml_collection_ID = XML_collection_ID), 'NULL') + ')'
                ELSE ''
              END AS [DataType],
          DataObjectType
        FROM
          (SELECT t.name AS TypeName, REPLACE(c.name, '@', '') AS Name,
                  c.max_length AS MaxLength, c.precision AS [Precision],
                  c.scale AS [Scale], c.[Object_id] AS ObjectID, XML_collection_ID,
                  is_XML_Document, 'P' AS DataobjectType
           FROM sys.parameters c
           INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
             AND parameter_id > 0
           UNION ALL
           SELECT t.name AS TypeName, c.name AS Name, c.max_length AS MaxLength,
                  c.precision AS [Precision], c.scale AS [Scale],
                  c.[Object_id] AS ObjectID, XML_collection_ID, is_XML_Document,
                  'C' AS DataobjectType
           FROM sys.columns c
           INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
           WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'
          ) f
      )
      SELECT CONVERT(CHAR(80), objectName + '.'
               + CASE WHEN DataobjectType = 'P' THEN '@' ELSE '' END + DataName), DataType
      FROM UserObject
      WHERE DataName IN
        (SELECT DataName FROM UserObject
         GROUP BY DataName
         HAVING MIN(Datatype) <> MAX(DataType))
      ORDER BY DataName

    Hmm. I can tell you I found quite a few minor issues with the various databases I tested this on, and found some potential bugs that really leap out at you from the results. Here is the start of the result for AdventureWorks. Yes, AccountNumber is, for some reason, a Varchar(10) in the Customer table. Hmm. Odd. Why is a city fifty characters long in that view? The idea of the description of a colour being 256 characters long seems over-ambitious. Go down the list and you'll spot other mistakes. There are no bugs, but just mess.

    We started out with a listing to examine parameters, then we mixed parameters and columns. Our last listing is for a slightly more in-depth look at table columns. You'll notice that we've deliberately removed the indication of whether a column is persisted, or is an identity column, because that gives us false positives for our code smells. If you just want to browse your metadata for other reasons (and it can quite help in some circumstances), then uncomment them!

      ;WITH userColumns AS (
        SELECT
          c.NAME AS columnName,
          OBJECT_SCHEMA_NAME(c.object_ID) + '.' + OBJECT_NAME(c.object_ID) AS ObjectName,
          REPLACE(t.name + ' '
            + CASE WHEN is_computed = 1
                THEN ' AS ' + --do DDL for a computed column
                  (SELECT definition FROM sys.computed_columns cc
                   WHERE cc.object_id = c.object_id AND cc.column_ID = c.column_ID)
                --we may have to put in the length
                WHEN t.Name IN ('char', 'varchar', 'nchar', 'nvarchar')
                  THEN '('
                    + CASE WHEN c.Max_Length = -1 THEN 'MAX'
                           ELSE CONVERT(VARCHAR(4),
                                  CASE WHEN t.Name IN ('nchar', 'nvarchar')
                                    THEN c.Max_Length / 2 ELSE c.Max_Length
                                  END)
                      END + ')'
                WHEN t.name IN ('decimal', 'numeric')
                  THEN '(' + CONVERT(VARCHAR(4), c.precision)
                    + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'
                ELSE ''
              END
            + CASE WHEN c.is_rowguidcol = 1 THEN ' ROWGUIDCOL' ELSE '' END
            + CASE WHEN XML_collection_ID <> 0
                THEN --deal with object schema names
                  '(' + CASE WHEN is_XML_Document = 1
                          THEN 'DOCUMENT '
                          ELSE 'CONTENT '
                        END
                    + COALESCE(
                        (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
                         FROM sys.xml_schema_collections sc
                         INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID
                         WHERE sc.xml_collection_ID = c.XML_collection_ID), 'NULL') + ')'
                ELSE ''
              END
            + CASE WHEN is_identity = 1
                THEN CASE WHEN OBJECTPROPERTY(object_id, 'IsUserTable') = 1
                            AND COLUMNPROPERTY(object_id, c.name, 'IsIDNotForRepl') = 0
                            AND OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                       THEN ''
                       ELSE ' NOT FOR REPLICATION '
                     END
                ELSE ''
              END
            + CASE WHEN c.is_nullable = 0 THEN ' NOT NULL' ELSE ' NULL' END
            + CASE WHEN c.default_object_id <> 0
                THEN ' DEFAULT ' + object_Definition(c.default_object_id)
                ELSE ''
              END
            + CASE WHEN c.collation_name IS NULL THEN ''
                   WHEN c.collation_name <>
                     (SELECT collation_name FROM sys.databases
                      WHERE name = DB_NAME()) COLLATE Latin1_General_CI_AS
                     THEN COALESCE(' COLLATE ' + c.collation_name, '')
                   ELSE ''
              END, '  ', ' ') AS [DataType]
        FROM sys.columns c
        INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
        WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'
      )
      SELECT CONVERT(CHAR(80), objectName + '.' + columnName), DataType
      FROM UserColumns
      WHERE columnName IN
        (SELECT columnName FROM UserColumns
         GROUP BY columnName
         HAVING MIN(Datatype) <> MAX(DataType))
      ORDER BY columnName

    If you take a look down the results against AdventureWorks, you'll see once again that there are things to investigate: mostly, in the illustration, discrepancies between null and non-null datatypes.

    So I hear you ask, what about temporary variables within routines? If ever there was a source of elusive bugs, you'll find it there. Sadly, these temporary variables are not stored in the metadata, so we'll have to find a more subtle way of flushing those out, and that will, I'm afraid, have to wait!

  • Have you used the ExecutionValue and ExecValueVariable properties?

    The ExecutionValue property and its friend ExecValueVariable are a much undervalued feature of SSIS, and many people I talk to are not even aware of their existence, so I thought I'd try and raise their profile a bit.

    The ExecutionValue property is defined on the base object Task, so all tasks have it available, but it is up to the task developer to do something useful with it. The basic idea behind it is that it allows the task to return something useful and interesting about what it has performed, in addition to the standard success or failure result. The best example perhaps is the Execute SQL Task, which uses the ExecutionValue property to return the number of rows affected by the SQL statement(s). This is a very useful feature, something people often want to capture into a variable, and start using the result set options to do. Unfortunately we cannot read the value of a task property at runtime from within a SSIS package, so the ExecutionValue property on its own is a bit of a let-down, but enter the ExecValueVariable and we have the perfect marriage.

    The ExecValueVariable is another property exposed through the task (TaskHost), which lets us select a SSIS package variable. What happens now is that when the task sets the ExecutionValue, the interesting value is copied into the variable we set on the ExecValueVariable property, and a variable is something we can access and do something with. So, put simply, if the ExecutionValue property value is of interest, make sure you create yourself a package variable and set its name as the ExecValueVariable. Have a look at the 3-step guide below:

    1. Configure your task as normal; for example, the Execute SQL Task, which here calls a stored procedure to do some updates.
    2. Create a variable of a suitable type to match the ExecutionValue; an integer is used to match the result we want to capture, the number of rows.
    3. Set the ExecValueVariable for the task: just select the variable we created in step 2. You need to do this in the Properties grid for the task (shortcut key: select the task and press F4).

    Now when we execute the sample task above, our variable UpdateQueueRowCount will get the number of rows we updated in our Execute SQL Task.

    I've tried to collate a list of tasks that return something useful via the ExecutionValue and ExecValueVariable mechanism, but the documentation isn't always great:

      • Execute SQL Task - Returns the number of rows affected by the SQL statement or statements.
      • File System Task - Returns the number of successful operations performed by the task.
      • File Watcher Task - Returns the full path of the file found.
      • Transfer Error Messages Task - Returns the number of error messages that have been transferred.
      • Transfer Jobs Task - Returns the number of jobs that are transferred.
      • Transfer Logins Task - Returns the number of logins transferred.
      • Transfer Master Stored Procedures Task - Returns the number of stored procedures transferred.
      • Transfer SQL Server Objects Task - Returns the number of objects transferred.
      • WMI Data Reader Task - Returns an object that contains the results of the task. Not exactly clear, but I assume it depends on the WMI query used.

  • Link instead of Attaching

    - by Daniel Moth
    With email storage not being an issue in many companies (I think I currently have 25GB of storage on my email account; I don't even think about storage), this encourages bad behaviors such as liberally attaching office documents to emails instead of sharing a link to the document in SharePoint or SkyDrive or some file share, etc. Attaching a file admittedly has its usage scenarios too, but it should not be the default. I thought I'd list the reasons why sharing a link can be better than attaching files directly. In no particular order:

      • Better Review. It allows multiple recipients to review the file, and their comments are aggregated into a single document. The alternative is everyone having to detach the document, add their comments, then send it back to you, and then you have to collate. With the alternative, you also potentially miss out on recipients reading comments from other recipients.
      • Always up to date. The attachment becomes a fork instead of an always up to date document. For example, you send the email on Thursday, I only open it on Tuesday: between those days you could have made updates that now I am missing because you decided to send an attachment instead of a link.
      • Better bookmarking. When I need to find that document you shared, you are forcing me to search through my email (I may not even be running Outlook), instead of opening the link which I have bookmarked in my browser, or in my collection of links in my OneNote, or from the recent/pinned links of the Office app on my task bar, etc.
      • Can control access. If someone accidentally or naively forwards your link to someone outside your group/org who you'd prefer not to have access to it, the location of the document can be protected with specific access control.
      • Can add more recipients. If someone adds people to the email thread in Outlook, your attachment doesn't get re-attached; instead, the person added is left without the attachment unless someone remembers to re-attach it. If it was a link, they are immediately caught up without further actions.
      • Enable discovery. If you put it on a share, I may be able to discover other cool stuff that lives alongside that document.
      • Save on storage. So this doesn't apply to me given my opening statement, but if in your company you do have such limitations, attaching files eats up storage on all recipients' accounts and will also get "lost" when those people archive email (and be lost completely at some point if they follow the company retention policy).

    Like I said, attachments do have their place, but they should be an explicit choice for explicit reasons rather than the default.

    Comments about this post by Daniel Moth welcome at the original blog.

  • --log-slave-updates is OFF but updates received from master are still logged to slave binary log?

    - by quanta
    MySQL version 5.5.14. According to the documentation, by default the slave does not log to its own binary log any updates that are received from a master server. Here is my config on the slave:

      # egrep 'bin|slave' /etc/my.cnf
      relay-log=mysqld-relay-bin
      log-bin = /var/log/mysql/mysql-bin
      binlog-format=MIXED
      sync_binlog = 1
      log-bin-trust-function-creators = 1

      mysql> show global variables like 'log_slave%';
      +-------------------+-------+
      | Variable_name     | Value |
      +-------------------+-------+
      | log_slave_updates | OFF   |
      +-------------------+-------+
      1 row in set (0.01 sec)

      mysql> select @@log_slave_updates;
      +---------------------+
      | @@log_slave_updates |
      +---------------------+
      |                   0 |
      +---------------------+
      1 row in set (0.00 sec)

    but the slave still logs the updates that are received from the master to its binary logs. Let's see the file sizes:

      -rw-rw---- 1 mysql mysql  37M Apr  1 01:00 /var/log/mysql/mysql-bin.001256
      -rw-rw---- 1 mysql mysql  25M Apr  2 01:00 /var/log/mysql/mysql-bin.001257
      -rw-rw---- 1 mysql mysql  46M Apr  3 01:00 /var/log/mysql/mysql-bin.001258
      -rw-rw---- 1 mysql mysql 115M Apr  4 01:00 /var/log/mysql/mysql-bin.001259
      -rw-rw---- 1 mysql mysql 105M Apr  4 18:54 /var/log/mysql/mysql-bin.001260

    and a sample query when reading these binary files with the mysqlbinlog utility:

      #120404 19:08:57 server id 3  end_log_pos 110324763  Query  thread_id=382435  exec_time=0  error_code=0
      SET TIMESTAMP=1333541337/*!*/;
      INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci'))
      /*!*/;
      # at 110324763

    Did I miss something?
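
    A diagnostic sketch, for reference: every binlog event records the server id of the server where it originated, and log_slave_updates only governs events applied by the replication SQL thread, so comparing ids shows whether these INSERTs really came from the master or were executed directly on the slave (e.g. by a local client, trigger, or scheduled event):

      SHOW VARIABLES LIKE 'server_id';
      -- then compare with the "server id N" field in the mysqlbinlog output above:
      -- events bearing the slave's own id were written locally, not replicated.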

  • What user information is exposed via a browser?

    - by ipso
    Is there a function or website that can collect and display ALL of the user information that can be obtained via a browser? Background: This of course does not account for the significant cross-reference abilities of large corporations to collate multiple sources and signals from users across various properties, but it's a first step. Ghostery is just a great idea: it shows people all of the surreptitious scripts that run on any given website. But what information is available – what is the total set of stored values – that those scripts can collect from? If you log in to a search engine and stay logged in but leave their tab, is that company still collecting your webpage viewing and activity from other tabs? Can past or future inputs to pages be captured – say, comments on another website? What types of activities are stored as variables in the browser app that can be collected? This is surely a highly complex question, given the countless user scenarios – but my whole point is to be able to cut through all that and just show the total set of data available at any given point in time. Then you can A/B test and see what is available within a fresh session with one tab open vs. the same webpage but with 12 tabs open, and a full day of history to boot. (Latest Firefox & Chrome – on Win7, Win8 or Mint13 – although I'd like to think that won't make too much of a difference. Make assumptions. Simple is better.)

  • How to get contacts in order of their upcoming birthdays?

    - by Pentium10
    I have code to read contact details and to read birthdays. But how do I get a list of contacts in order of their upcoming birthdays? For a single contact identified by id, I get details and birthday like this:

      Cursor c = null;
      try {
          Uri uri = ContentUris.withAppendedId(ContactsContract.Contacts.CONTENT_URI, id);
          c = ctx.getContentResolver().query(uri, null, null, null, null);
          if (c != null) {
              if (c.moveToFirst()) {
                  DatabaseUtils.cursorRowToContentValues(c, data);
              }
          }
          c.close();

          // read birthday
          c = ctx.getContentResolver().query(
                  Data.CONTENT_URI,
                  new String[] { Event.DATA },
                  Data.CONTACT_ID + "=" + id + " AND " + Data.MIMETYPE + "= '"
                          + Event.CONTENT_ITEM_TYPE + "' AND " + Event.TYPE + "="
                          + Event.TYPE_BIRTHDAY,
                  null, Data.DISPLAY_NAME);
          if (c != null) {
              try {
                  if (c.moveToFirst()) {
                      this.setBirthday(c.getString(0));
                  }
              } finally {
                  c.close();
              }
          }
          return super.load(id);
      } catch (Exception e) {
          Log.v(TAG(), e.getMessage(), e);
          e.printStackTrace();
          return false;
      } finally {
          if (c != null) c.close();
      }

    And the code to read all contacts is:

      public Cursor getList() {
          // Get the base URI for the People table in the Contacts content provider.
          Uri contacts = ContactsContract.Contacts.CONTENT_URI;
          // Make the query.
          ContentResolver cr = ctx.getContentResolver();
          // Form an array specifying which columns to return.
          String[] projection = new String[] {
                  ContactsContract.Contacts._ID,
                  ContactsContract.Contacts.DISPLAY_NAME };
          Cursor managedCursor = cr.query(contacts, projection, null, null,
                  ContactsContract.Contacts.DISPLAY_NAME + " COLLATE LOCALIZED ASC");
          return managedCursor;
      }
