Search Results

Search found 35003 results on 1401 pages for 'table variable'.

  • Can it be a good idea to create a new table for each client of a webapp?

    - by Will
    This is semi-hypothetical, and as I've no experience in dealing with massive database tables, I have no idea if this is horrible for some reason. On to the situation: Imagine a web-based application - let's say accounting software - which has 20,000 clients and each client has 1000+ entries in a table. That's 20 million rows, which I know can certainly slow down complex queries. In a case like this, does it make more sense to create a new table in the database for each client? How do databases react to having 20k (or more!) tables?
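
    For comparison, here is a minimal sketch of the conventional single shared table that the question is weighing against; the table and column names are made up for illustration. With a composite primary key leading on the client id, each client's ~1,000 rows sit together and per-client queries never scan the full 20 million rows:

      -- Hypothetical shared "entries" table for all 20,000 clients
      CREATE TABLE entries (
          client_id   INT           NOT NULL,
          entry_id    INT           NOT NULL,
          entry_date  DATE          NOT NULL,
          amount      DECIMAL(12,2) NOT NULL,
          description VARCHAR(255)  NULL,
          PRIMARY KEY (client_id, entry_id)  -- clusters each client's rows together
      );

      -- A per-client report touches only that client's slice of the data
      SELECT entry_date, amount
      FROM entries
      WHERE client_id = 123
        AND entry_date >= '2014-01-01';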

    Read the article

  • Microsoft takes another step toward making Surface mainstream, unveiling a new version of its touch table

    Microsoft takes another step toward making Surface mainstream, unveiling a new version of its touch table. Update of 06.01.2011 by Katleen. Remember Surface, the touch-sensitive coffee table developed by Microsoft that we presented to you in November 2009? The device was already very interesting, but suffered from some sluggishness and was rather bulky (anyone who has had to move one certainly remembers it). On top of that, the toy was extremely expensive, enough to make you think you would not be buying one even in ten or twenty years (interest not included). Even back then, Microsoft hoped to one day see its technology become more accessible. And the company seems to be heading down that path.

    Read the article

  • Variable-step update() in game loop is falling behind, how can I get around this?

    - by ThatsGobbles
    I'm working on a minimal game engine for my next game. I'm using the delta update method, as shown: void update(double delta) { // Update code that uses `delta` goes here } I have a deep hierarchy of updatable objects, with a root updatable that contains several updatables, each of which contains more updatables, etc. Normally I'd just iterate through each of the root's children and update each one, which would then do the same for its children, and so on. However, passing a fixed value of delta to the root means that by the time the leaf updatables are reached, more than delta seconds have elapsed. This is causing noticeable desyncing in my game, and time synchronization is very important in my case (I'm working on a rhythm game). Any ideas on how I should tackle this? I've considered using StopWatches and a global readable timer, but any advice would be helpful. I'm also open to moving to fixed timesteps as opposed to variable ones.

    Read the article

  • Trying to import SQL file in a xampp server returns error

    - by Victor_J_Martin
    I have done a ER diagram in Mysql Workbench, and I am trying load in my server with phpMyAdmin, but it returns me the next error: Error SQL Query: -- ----------------------------------------------------- -- Table `BDA`.`UG` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`UG` ( `numero_ug` INT NOT NULL, `nombre` VARCHAR(45) NOT NULL, `segunda_firma_autorizada` VARCHAR(45) NOT NULL, `fecha_creacion` DATE NOT NULL, `nombre_depto` VARCHAR(140) NOT NULL, `dni` INT NOT NULL, `anho_contable` INT NOT NULL, PRIMARY KEY (`numero_ug`), INDEX `nombre_depto_idx` (`nombre_depto` ASC), INDEX `dni_idx` (`dni` ASC), INDEX `anho_contable_idx` (`anho_contable` ASC), CONSTRAINT `nombre_depto` FOREIGN KEY (`nombre_depto`) REFERENCES `BDA`.`Departamento` (`nombre_depto`) ON DELETE NO ACTION ON UPDATE NO ACTION, CONSTRAINT `dni` FOREIGN KEY (`dni`) REFERENCES `BDA`.`Trabajador` (`dni`) ON DELETE NO ACTION ON UPDATE NO ACTION, CONSTRAINT `anho_contable` FOREIGN KEY (`anho_contable`) REFERENCES `BDA`.`Capitulo_Contable` (`anho_contable`) [...] MySQL said: Documentation #1022 - Can't write; duplicate key in table 'ug' I export the result of the diagram from Mysql Workbench to a SQL file, and this file is what I'm trying to upload. This is the file. I can not find the duplicate key. SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0; SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0; SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL,ALLOW_INVALID_DATES'; CREATE SCHEMA IF NOT EXISTS `BDA` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci ; USE `BDA` ; -- ----------------------------------------------------- -- Table `BDA`.`Departamento` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`Departamento` ( `nombre_depto` VARCHAR(140) NOT NULL, `area_depto` VARCHAR(140) NOT NULL, PRIMARY KEY (`nombre_depto`)) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `BDA`.`Trabajador` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`Trabajador` ( `dni` INT NOT NULL, `direccion` VARCHAR(140) NOT NULL, `nombre` VARCHAR(45) NOT NULL, `apellidos` VARCHAR(140) NOT NULL, `fecha_nacimiento` DATE NOT NULL, `fecha_contrato` DATE NOT NULL, `titulacion` VARCHAR(140) NULL, `nombre_depto` VARCHAR(45) NOT NULL, PRIMARY KEY (`dni`), INDEX `nombre_depto_idx` (`nombre_depto` ASC), CONSTRAINT `nombre_depto` FOREIGN KEY (`nombre_depto`) REFERENCES `BDA`.`Departamento` (`nombre_depto`) ON DELETE NO ACTION ON UPDATE NO ACTION) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `BDA`.`Capitulo_Contable` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`Capitulo_Contable` ( `anho_contable` INT NOT NULL, `numero_ug` INT NOT NULL, `debe` DOUBLE NOT NULL, `haber` DOUBLE NOT NULL, PRIMARY KEY (`anho_contable`), INDEX `numero_ug_idx` (`numero_ug` ASC), CONSTRAINT `numero_ug` FOREIGN KEY (`numero_ug`) REFERENCES `BDA`.`UG` (`numero_ug`) ON DELETE NO ACTION ON UPDATE NO ACTION) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `BDA`.`UG` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`UG` ( `numero_ug` INT NOT NULL, `nombre` VARCHAR(45) NOT NULL, `segunda_firma_autorizada` VARCHAR(45) NOT NULL, `fecha_creacion` DATE NOT NULL, `nombre_depto` VARCHAR(140) NOT NULL, `dni` INT NOT NULL, `anho_contable` INT NOT NULL, PRIMARY KEY 
(`numero_ug`), INDEX `nombre_depto_idx` (`nombre_depto` ASC), INDEX `dni_idx` (`dni` ASC), INDEX `anho_contable_idx` (`anho_contable` ASC), CONSTRAINT `nombre_depto` FOREIGN KEY (`nombre_depto`) REFERENCES `BDA`.`Departamento` (`nombre_depto`) ON DELETE NO ACTION ON UPDATE NO ACTION, CONSTRAINT `dni` FOREIGN KEY (`dni`) REFERENCES `BDA`.`Trabajador` (`dni`) ON DELETE NO ACTION ON UPDATE NO ACTION, CONSTRAINT `anho_contable` FOREIGN KEY (`anho_contable`) REFERENCES `BDA`.`Capitulo_Contable` (`anho_contable`) ON DELETE NO ACTION ON UPDATE NO ACTION) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `BDA`.`Cliente` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`Cliente` ( `cif_cliente` INT NOT NULL, `nombre_cliente` VARCHAR(140) NOT NULL, PRIMARY KEY (`cif_cliente`)) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `BDA`.`Ingreso` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`Ingreso` ( `id` INT NOT NULL, `concepto` VARCHAR(45) NOT NULL, `importe` DOUBLE NOT NULL, `fecha` DATE NOT NULL, `cif_cliente` INT NOT NULL, `numero_ug` INT NOT NULL, PRIMARY KEY (`id`), INDEX `cif_cliente_idx` (`cif_cliente` ASC), INDEX `numero_ug_idx` (`numero_ug` ASC), CONSTRAINT `cif_cliente` FOREIGN KEY (`cif_cliente`) REFERENCES `BDA`.`Cliente` (`cif_cliente`) ON DELETE NO ACTION ON UPDATE NO ACTION, CONSTRAINT `numero_ug` FOREIGN KEY (`numero_ug`) REFERENCES `BDA`.`UG` (`numero_ug`) ON DELETE NO ACTION ON UPDATE NO ACTION) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `BDA`.`Proveedor` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`Proveedor` ( `cif_proveedor` INT NOT NULL, `nombre_proveedor` VARCHAR(140) NOT NULL, PRIMARY KEY (`cif_proveedor`)) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `BDA`.`Gasto` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`Gasto` ( `id` INT NOT NULL, `concepto` VARCHAR(45) NOT NULL, `importe` DOUBLE NOT NULL, `fecha` DATE NOT NULL, `factura` INT NOT NULL, `cif_proveedor` INT NOT NULL, `numero_ug` INT NOT NULL, PRIMARY KEY (`id`), INDEX `cif_proveedor_idx` (`cif_proveedor` ASC), INDEX `numero_ug_idx` (`numero_ug` ASC), CONSTRAINT `cif_proveedor` FOREIGN KEY (`cif_proveedor`) REFERENCES `BDA`.`Proveedor` (`cif_proveedor`) ON DELETE NO ACTION ON UPDATE NO ACTION, CONSTRAINT `numero_ug` FOREIGN KEY (`numero_ug`) REFERENCES `BDA`.`UG` (`numero_ug`) ON DELETE NO ACTION ON UPDATE NO ACTION) ENGINE = InnoDB; SET SQL_MODE=@OLD_SQL_MODE; SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS; SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS; Thanks for your advices.
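
    For what it's worth, error #1022 on a Workbench export like this usually points to duplicate constraint names rather than duplicate data: InnoDB requires foreign-key constraint symbols to be unique across the schema, and this script reuses names such as `nombre_depto` (in `Trabajador` and `UG`) and `numero_ug` (in `Capitulo_Contable`, `Ingreso` and `Gasto`) as constraint names. A hedged sketch of the kind of rename that avoids the collision; the demo tables and `fk_` names below are invented for illustration:

      -- Minimal illustration: two tables referencing `Departamento` must each use
      -- their own constraint symbol; reusing the same symbol in both tables is what
      -- triggers the duplicate-key error during import.
      CREATE TABLE IF NOT EXISTS `BDA`.`Demo_A` (
        `id` INT NOT NULL PRIMARY KEY,
        `nombre_depto` VARCHAR(140) NOT NULL,
        CONSTRAINT `fk_Demo_A_Departamento`
          FOREIGN KEY (`nombre_depto`) REFERENCES `BDA`.`Departamento` (`nombre_depto`)
      ) ENGINE = InnoDB;

      CREATE TABLE IF NOT EXISTS `BDA`.`Demo_B` (
        `id` INT NOT NULL PRIMARY KEY,
        `nombre_depto` VARCHAR(140) NOT NULL,
        CONSTRAINT `fk_Demo_B_Departamento`  -- schema-unique name, not `nombre_depto` again
          FOREIGN KEY (`nombre_depto`) REFERENCES `BDA`.`Departamento` (`nombre_depto`)
      ) ENGINE = InnoDB;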

    Read the article

  • How to recursively delete some xml elements using XSLT

    - by Monomachus
    Hi, So I got this situation which sucks. I have an XML like this <table border="1" cols="200 100pt 200"> <tr> <td>isbn</td> <td>title</td> <td>price</td> </tr> <tr> <td /> <td /> <td> <span type="champsimple" id="9b297fb5-d12b-46b1-8899-487a2df0104e" categorieid="a1c70692-0427-425b-983c-1a08b6585364" champcoderef="01f12b93-b4c5-401b-9da1-c9385d77e43f"> [prénom] </span> <span type="champsimple" id="e103a6a5-d1be-4c34-8a54-d234179fb4ea" categorieid="a1c70692-0427-425b-983c-1a08b6585364" champcoderef="01f12b93-b4c5-401b-9da1-c9385d77e43f">[nom]</span> <span></span> </td> </tr> <tr></tr> <tr> <td></td> <td>Phill It in</td> </tr> <tr> <table id="cas1"> <tr> <td ></td> <td >foo</td> </tr> <tr> <td >bar</td> <td >boo</td> </tr> </table> </tr> <tr> <table id="cas2"> <tr> <td ></td> <td >foo</td> </tr> <tr> <td ></td> <td >boo</td> </tr> </table> </tr> <tr> <table id="cas3"> <tr> <td >bar</td> <td ></td> </tr> <tr> <td >foo</td> <td >boo</td> </tr> </table> </tr> <tr> <table id="cas4"> <tr> <td /> <td /> </tr> <tr> <td>foo</td> <td>boo</td> </tr> </table> </tr> <table id="cas4"> <tr> <td /> <td /> </tr> <tr> <td>foo</td> <td>boo</td> </tr> </table> <tr> <td /> <td /> </tr> </table> Now the question is how would I recursively delete all empty td, tr and table elements? Now I use this XSLT <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output omit-xml-declaration="yes" indent="yes"/> <xsl:strip-space elements="*" /> <xsl:template match="node()|@*"> <xsl:copy> <xsl:apply-templates select="node()|@*"/> </xsl:copy> </xsl:template> <xsl:template match="td[not(node())]" /> <xsl:template match="tr[not(node())]" /> <xsl:template match="table[not(node())]" /> </xsl:stylesheet> But it doesn't do very well. After I delete td, a tr becomes empty but it doesn't handle that. Too bad. See the table element with "cas4". <table border="1" cols="200 100pt 200"> <tr> <td>isbn</td> <td>title</td> <td>price</td> </tr> <tr> <td> <span type="champsimple" id="9b297fb5-d12b-46b1-8899-487a2df0104e" categorieid="a1c70692-0427-425b-983c-1a08b6585364" champcoderef="01f12b93-b4c5-401b-9da1-c9385d77e43f"> [prénom] </span> <span type="champsimple" id="e103a6a5-d1be-4c34-8a54-d234179fb4ea" categorieid="a1c70692-0427-425b-983c-1a08b6585364" champcoderef="01f12b93-b4c5-401b-9da1-c9385d77e43f">[nom]</span> <span /> </td> </tr> <tr> <td>Phill It in</td> </tr> <tr> <table id="cas1"> <tr> <td>foo</td> </tr> <tr> <td>bar</td> <td>boo</td> </tr> </table> </tr> <tr> <table id="cas2"> <tr> <td>foo</td> </tr> <tr> <td>boo</td> </tr> </table> </tr> <tr> <table id="cas3"> <tr> <td>bar</td> </tr> <tr> <td>foo</td> <td>boo</td> </tr> </table> </tr> <tr> <table id="cas4"> <tr /> <tr> <td>foo</td> <td>boo</td> </tr> </table> </tr> <table id="cas4"> <tr /> <tr> <td>foo</td> <td>boo</td> </tr> </table> <tr /> </table> How would you solve this problem?

    Read the article

  • PHP / MYSQL: Database empties when I use a variable in the WHERE condition of the last mysql_query

    - by Christian Cugnet
    <?php require 'connect.php'; $search = $_POST["search"]; These two queries work fine. So I used their format for the one below. $result = mysql_query("SELECT * FROM `subjects` WHERE $search = `student_id`"); $result2 = mysql_query("SELECT * FROM `grades` WHERE $search = `student_id`"); while($row = mysql_fetch_array($result)) { $row2 = mysql_fetch_array($result2); echo"<table border='1'>"; echo "<tr>"; echo "<th>Subjects:</th>"; echo "<th>Current Mark:</th>"; echo "<th>Edit Mark:</th>"; echo"</tr>"; echo"<tr>"; echo "<td>". $row['c1'] ."</td>"; echo "<td>". $row2['m1'] ."</td>"; echo "<td><input type='text' name='m1'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c2'] ."</td>"; echo "<td>". $row2['m2'] ."</td>"; echo "<td><input type='text' name='m2'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c3'] ."</td>"; echo "<td>". $row2['m3'] ."</td>"; echo "<td><input type='text' name='m3'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c4'] ."</td>"; echo "<td>". $row2['m4'] ."</td>"; echo "<td><input type='text' name='m4'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c5'] ."</td>"; echo "<td>". $row2['m5'] ."</td>"; echo "<td><input type='text' name='m5'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c6'] ."</td>"; echo "<td>". $row2['m6'] ."</td>"; echo "<td><input type='text' name='m6'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c7'] ."</td>"; echo "<td>". $row2['m7'] ."</td>"; echo "<td><input type='text' name='m7'></td>"; echo "</tr>"; echo "</table>"; echo "<input type='submit' name='submit' value='Submit'>"; echo "</form>"; } $M1 = $_POST["m1"]; $M2 = $_POST["m2"]; $M3 = $_POST["m3"]; $M4 = $_POST["m4"]; $M5 = $_POST["m5"]; $M6 = $_POST["m6"]; $M7 = $_POST["m7"]; It works if I put numbers e.x. 11111 Otherwise it just enters blank spaces into the table. I've tried '".$search."' I've tried ".$search." mysql_query("UPDATE grades SET m1 = '$M1', m2 = '$M2',m3 = '$M3',m4 = '$M4',m5 = '$M5',m6 = '$M6',m7 = '$M7' WHERE $search = `student_id`"); ?> Table +------------+---+---+---+---+---+---+---+ |student_id|m1|m2|m3|m4|m5|m6|m7| +------------+---+---+---+---+---+---+---+ ===Database d1 == Table structure for table grades |------ |Column|Type|Null|Default |------ |//student_id//|int(5)|No| |m1|text|No| |m2|text|No| |m3|text|No| |m4|text|No| |m5|text|No| |m6|text|No| |m7|text|No| == Dumping data for table grades |11111| | | | | | | |11112|fg|fd|f|f|fd|f|f ===Database d1 == Table structure for table subjects |------ |Column|Type|Null|Default |------ |//student_id//|int(11)|No| |c1|text|No| |c2|text|No| |c3|text|No| |c4|text|No| |c5|text|No| |c6|text|No| |c7|text|No| == Dumping data for table subjects |11111|English|Math|Science|Sport|IT|Art|History |11112|grdgg|vsbvbbb|bdbbrfd|bdbrb|dbrbfbf|fbdfbdbf|dbfbdfb
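
    Not a full diagnosis, but one detail worth isolating is the unquoted $search in every WHERE clause: WHERE $search = `student_id` only forms valid SQL when the submitted value is numeric. A sketch of the quoted, column-first form the statements would normally take (11111 is just the sample id from the dump; in PHP the value should still be escaped or validated before being interpolated):

      -- Quoting the search value keeps the statement valid for numeric and string input alike
      SELECT * FROM `subjects` WHERE `student_id` = '11111';
      SELECT * FROM `grades`   WHERE `student_id` = '11111';

      -- Same idea for the final update
      UPDATE `grades`
      SET `m1` = 'A', `m2` = 'B', `m3` = 'C', `m4` = 'D', `m5` = 'E', `m6` = 'F', `m7` = 'G'
      WHERE `student_id` = '11111';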

    Read the article

  • SQL SERVER – Using expressor Composite Types to Enforce Business Rules

    - by pinaldave
    One of the features that distinguish the expressor Data Integration Platform from other products in the data integration space is its concept of composite types, which provide an effective and easily reusable way to clearly define the structure and characteristics of data within your application.  An important feature of the composite type approach is that it allows you to easily adjust the content of a record to its ultimate purpose.  For example, a record used to update a row in a database table is easily defined to include only the minimum set of columns, that is, a value for the key column and values for only those columns that need to be updated. Much like a class in higher level programming languages, you can also use the composite type as a way to enforce business rules onto your data by encapsulating a datum’s name, data type, and constraints (for example, maximum, minimum, or acceptable values) as a single entity, which ensures that your data can not assume an invalid value.  To what extent you use this functionality is a decision you make when designing your application; the expressor design paradigm does not force this approach on you. Let’s take a look at how these features are used.  Suppose you want to create a group of applications that maintain the employee table in your human resources database. Your table might have a structure similar to the HumanResources.Employee table in the AdventureWorks database.  This table includes two columns, EmployeID and rowguid, that are maintained by the relational database management system; you cannot provide values for these columns when inserting new rows into the table. Additionally, there are columns such as VacationHours and SickLeaveHours that you might choose to update for all employees on a monthly basis, which justifies creation of a dedicated application. By creating distinct composite types for the read, insert and update operations against this table, you can more easily manage this table’s content. When developing this application within expressor Studio, your first task is to create a schema artifact for the database table.  This process is completely driven by a wizard, only requiring that you select the desired database schema and table.  The resulting schema artifact defines the mapping of result set records to a record within the expressor data integration application.  The structure of the record within the expressor application is a composite type that is given the default name CompositeType1.  As you can see in the following figure, all columns from the table are included in the result set and mapped to an identically named attribute in the default composite type. If you are developing an application that needs to read this table, perhaps to prepare a year-end report of employees by department, you would probably not be interested in the data in the rowguid and ModifiedDate columns.  A typical approach would be to drop this unwanted data in a downstream operator.  But using an alternative composite type provides a better approach in which the unwanted data never enters your application. While working in expressor  Studio’s schema editor, simply create a second composite type within the same schema artifact, which you could name ReadTable, and remove the attributes corresponding to the unwanted columns. The value of an alternative composite type is even more apparent when you want to insert into or update the table.  
In the composite type used to insert rows, remove the attributes corresponding to the EmployeeID primary key and rowguid uniqueidentifier columns since these values are provided by the relational database management system. And to update just the VacationHours and SickLeaveHours columns, use a composite type that includes only the attributes corresponding to the EmployeeID, VacationHours, SickLeaveHours and ModifiedDate columns. By specifying this schema artifact and composite type in a Write Table operator, your upstream application need only deal with the four required attributes and there is no risk of unintentionally overwriting a value in a column that does not need to be updated. Now, what about the option to use the composite type to enforce business rules?  If you review the composition of the default composite type CompositeType1, you will note that the constraints defined for many of the attributes mirror the table column specifications.  For example, the maximum number of characters in the NationaIDNumber, LoginID and Title attributes is equivalent to the maximum width of the target column, and the size of the MaritalStatus and Gender attributes is limited to a single character as required by the table column definition.  If your application code leads to a violation of these constraints, an error will be raised.  The expressor design paradigm then allows you to handle the error in a way suitable for your application.  For example, a string value could be truncated or a numeric value could be rounded. Moreover, you have the option of specifying additional constraints that support business rules unrelated to the table definition. Let’s assume that the only acceptable values for marital status are S, M, and D.  Within the schema editor, double-click on the MaritalStatus attribute to open the Edit Attribute window.  Then click the Allowed Values checkbox and enter the acceptable values into the Constraint Value text box. The schema editor is updated accordingly. There is one more option that the expressor semantic type paradigm supports.  Since the MaritalStatus attribute now clearly specifies how this type of information should be represented (a single character limited to S, M or D), you can convert this attribute definition into a shared type, which will allow you to quickly incorporate this definition into another composite type or into the description of an output record from a transform operator. Again, double-click on the MaritalStatus attribute and in the Edit Attribute window, click Convert, which opens the Share Local Semantic Type window that you use to name this shared type.  There’s no requirement that you give the shared type the same name as the attribute from which it was derived.  You should supply a name that makes it obvious what the shared type represents. In this posting, I’ve overviewed the expressor semantic type paradigm and shown how it can be used to make your application development process more productive.  The beauty of this feature is that you choose when and to what extent you utilize the functionality, but I’m certain that if you opt to follow this approach your efforts will become more efficient and your work will progress more quickly.  As always, I encourage you to download and evaluate expressor Studio for your current and future data integration needs. 
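
    As a concrete point of reference for the update scenario described above (not expressor code, just the SQL it ultimately maps to), the narrow composite type corresponds to an UPDATE that touches only the key and the columns being changed; the literal values here are placeholders:

      -- Update only VacationHours, SickLeaveHours and ModifiedDate for one employee
      UPDATE HumanResources.Employee
      SET    VacationHours  = VacationHours + 8,
             SickLeaveHours = SickLeaveHours + 4,
             ModifiedDate   = GETDATE()
      WHERE  EmployeeID = 1;  -- key column as named in the post; no other columns are touched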
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SharePoint 2010 Center and Fixed Width of all content on page including the ribbon

    - by Bill Daugherty
    All, I am trying to make the width of the sharepoint 2010 web site to be within a fixed width and centered across the screen. I would like for it to be 800px and centered. When i do this, it seems like it starts to work until the ribbion bar renters. Here is my attempt so far: body.v4/* _lcid="1033" _version="14.0.4536" _LocalBinding */ body,form{ margin:0px; width:800px; text-align:center; vertical-align:middle; } .ms-toolbar{ font-family:verdana; font-size:8pt; text-decoration:none; /* [ReplaceColor(themeColor:"Hyperlink")] */ color:#0072BC; } a.ms-toolbar:hover{ text-decoration:underline; /* [ReplaceColor(themeColor:"Accent1",themeShade:"0.8")] */ color:#005e9a; } .ms-toolbar-togglebutton-on{ /* [ReplaceColor(themeColor:"Accent3-Darker")] */ border:1px solid #2353b2; /* [ReplaceColor(themeColor:"Accent4-Lightest")] */ background-color:#fffacc; } table.ms-toolbar{ height:45px; border:none; /* [RecolorImage(themeColor:"Light2",includeRectangle:{x:0,y:610,width:1,height:42})] */ background:url("/_layouts/images/bgximg.png") repeat-x -0px -610px; /* [ReplaceColor(themeColor:"Light1")] */ background-color:#fff; } table.ms-toolbar{ /* [ReplaceColor(themeColor:"Light2-Lightest")] */ border:1px solid #f1f1f2; } .ms-menutoolbar{ /* [ReplaceColor(themeColor:"Light2-Lightest")] */ border-bottom:1px solid #f1f1f2; /* [ReplaceColor(themeColor:"Light1")] */ background-color:#fff; /* [RecolorImage(themeColor:"Light2",includeRectangle:{x:0,y:610,width:1,height:42})] */ background:url("/_layouts/images/bgximg.png") repeat-x -0px -610px; height:45px; } .ms-menutoolbar td{ padding:0px 0px 0px 4px; margin:0px; border:none; } .ms-menutoolbar td a{ /* [ReplaceColor(themeColor:"Hyperlink")] */ color:#0072bc; font-size:8pt; font-family:verdana; text-decoration:none; } .ms-menutoolbar td a:hover{ /* [ReplaceColor(themeColor:"Hyperlink",themeShade:"0.82")] */ color:#005e9a; text-decoration:none; } .ms-menubuttoninactivehover,.ms-buttoninactivehover{ margin:3px; padding:3px 4px 4px 4px; border:1px solid transparent; background-color:transparent; white-space:nowrap; } .ms-menubuttonactivehover,.ms-buttonactivehover{ margin:3px; padding:3px 4px 4px 4px; /* [RecolorImage(themeColor:"Light1-Darkest",includeRectangle:{x:0,y:431,width:1,height:21})] */ background:url("/_layouts/images/bgximg.png") repeat-x -0px -431px; /* [ReplaceColor(themeColor:"Light1")] */ background-color:#fff; /* [ReplaceColor(themeColor:"Light1-Lighter")] */ border:solid 1px #cccccc; cursor:pointer; } .ms-buttoninactivehover{ white-space:nowrap; } .ms-buttoninactivehover img,.ms-buttonactivehover img{ margin:0px 1px 0px 0px; } td.ms-menutoolbarheader{ font-size:10pt; font-family:verdana; /* [ReplaceColor(themeColor:"Accent3-Medium")] */ color:#204d89; font-weight:bold; line-height:16px; padding-left:7px; padding-right:7px; } .ms-listheaderlabel{ /* [ReplaceColor(themeColor:"Dark2")] */ color:#204d89; } .ms-listheaderlabel,.ms-viewselector,.ms-viewselectortext,.ms-viewselectorhover{ font-size:8pt; font-family:tahoma; } .ms-menutoolbar td td.ms-viewselector,.ms-menutoolbar td td.ms-viewselectorhover,.ms-toolbar td td.ms-viewselector,.ms-toolbar td td.ms-viewselectorhover,td.ms-viewselector{ /* [ReplaceColor(themeColor:"Light1")] */ background-color:#ffffff; /* [ReplaceColor(themeColor:"Dark2-Medium")] */ border:1px solid #D3D6DA; font-weight:bold; padding:0px; } .ms-menutoolbar td td{ border:none; } div.ms-viewselector,div.ms-viewselectorhover{ padding:2px 4px 2px 4px; cursor:pointer; } div.ms-viewselector a,div.ms-viewselectorhover 
a.ms-menu-a span{ /* [ReplaceColor(themeColor:"Dark1")] */ color:#000000; } .ms-viewselector-arrow{ vertical-align:middle; } .ms-menutoolbar td td.ms-viewselectorhover,.ms-toolbar td td.ms-viewselectorhover{ /* [RecolorImage(themeColor:"Accent1",method:"Tinting",includeRectangle:{x:0,y:654,width:1,height:18})] */ background:url("/_layouts/images/bgximg.png") repeat-x -0px -654px; /* [ReplaceColor(themeColor:"Accent1-Lighter")] */ border-color:#91cdf2; /* [ReplaceColor(themeColor:"Accent1",themeTint:"0.35")] */ background-color:#ccebff; } .ms-bottompaging{ /* [ReplaceColor(themeColor:"Accent3-Lightest")] */ background:#ebf3ff; } .ms-bottompagingline1{ height:3px; /* [ReplaceColor(themeColor:"Light1")] */ background-color:#ffffff; } .ms-bottompagingline2,.ms-bottompagingline3{ height:1px; /* [ReplaceColor(themeColor:"Light1")] */ background-color:#ffffff; } .ms-bottompaging .ms-vb{ /* [ReplaceColor(themeColor:"Light1")] */ background-color:#ffffff; } .ms-bottompagingline2 img,.ms-bottompagingline3 img,.ms-partline img{ display:none; } .ms-paging{ padding-left:11px; padding-right:11px; padding-bottom:4px; font-family:tahoma,sans-serif; font-size:8pt; font-weight:normal; /* [ReplaceColor(themeColor:"Accent3-Darker")] */ color:#204d89; } .ms-bottompaging .ms-paging{ /* [ReplaceColor(themeColor:"Dark1-Medium")] */ color:#4c4c4c; } .ms-menutoolbar .ms-splitbuttondropdown{ padding:3px 2px 0px 2px; } .ms-menutoolbar .ms-splitbuttontext{ padding:0px 7px 1px 7px; } .ms-splitbutton{ margin:0px 2px; } .ms-splitbuttonhover{ margin:0px 2px; /* [RecolorImage(themeColor:"Accent6-Darker",method:"Tinting",includeRectangle:{x:0,y:431,width:1,height:21})] */ background:url("/_layouts/images/bgximg.png") repeat-x -0px -431px; border-collapse:collapse; height:22px; background-color:#fff; } .ms-splitbuttonhover .ms-splitbuttondropdown{ padding:3px 1px 0px 2px; } .ms-splitbuttonhover .ms-splitbuttontext{ padding:0px 6px 0px 6px; } .ms-splitbuttonhover .ms-splitbuttondropdown,.ms-splitbuttonhover .ms-splitbuttontext{ border:solid 1px #cccccc; cursor:pointer; } .ms-propertysheet { font-size:1em; } .ms-propertysheet th.ms-gridT1 { text-align:left; /* [ReplaceColor(themeColor:"Dark1")] */ color:#000000; width:190px; } .ms-viewselect a:link{ font-size:8pt; font-family:Verdana,sans-serif; /* [ReplaceColor(themeColor:"Accent3")] */ color:#003399; } select{ font-size:8pt; font-family:Verdana,sans-serif; } hr{ /* [ReplaceColor(themeColor:"Accent3")] */ color:#003399; height:2px; } .ms-input{ font-size:8pt; font-family:Verdana,sans-serif; } .ms-treeviewouter{ margin-top:5px; } .ms-quicklaunch table td{ /* [ReplaceColor(themeColor:"Accent3-Lighter")] */ border-top:1px solid #add1ff; } .ms-quicklaunch .ms-treeviewouter table td{ border-top:none; } .ms-quicklaunch table.ms-navheader td,.ms-quicklaunch span.ms-navheader{ padding:1px 4px 4px 4px; } div.ms-treeviewouter > div > div{ border:none; } .ms-quicklaunch span.ms-navheader{ /* [ReplaceColor(themeColor:"Accent3-Lightest")] */ background-color:#d6e8ff; /* [ReplaceColor(themeColor:"Accent3-Lighter")] */ border-top:1px solid #add1ff; /* [ReplaceColor(themeColor:"Accent3-Lightest")] */ border-left:solid 1px #f2f8ff; /* [ReplaceColor(themeColor:"Accent3-Lighter")] */ border-bottom:1px solid #add1ff; padding:1px 6px 3px 6px; } .ms-quicklaunch table.ms-navsubmenu2 td{ border:none; } .ms-quicklaunch table.ms-selectednavheader td{ width:100%; /* [ReplaceColor(themeColor:"Accent6-Lightest")] */ background-color:#fff699; } .ms-quicklaunch table.ms-selectednavheader{ border:none; } 
.ms-quicklaunch span{ display:block; } .ms-quicklaunch div.ms-navsubmenu1 br{ display:none; } .ms-quicklaunch table.ms-selectednav{ /* [ReplaceColor(themeColor:"Accent6-Darker")] */ border:solid 1px #d2b47a; /* [RecolorImage(themeColor:"Accent1",method:"Tinting")] */ background-image:url("/_layouts/images/selectednav.gif"); background-repeat:repeat-x; /* [ReplaceColor(themeColor:"Accent6-Lightest")] */ background-color:#ffe6a0; margin:2px; margin-bottom:0; width:97%; } .ms-quicklaunch table.ms-selectednav td{ background:transparent url("/_layouts/images/selectednavbullet.gif"); background-repeat:no-repeat; background-position:left top; /* [ReplaceColor(themeColor:"Light1")] */ border:solid 1px #ffffff; padding:0px 4px 1px 12px; margin:0px; } table.ms-selectednav td a.ms-selectednav{ background:none; /* [ReplaceColor(themeColor:"Dark1")] */ color:#000000; } .ms-quicklaunch table.ms-selectednavheader td{ width:100%; /* [ReplaceColor(themeColor:"Accent6-Lighter")] */ background-color:#ffe6a0; /* [RecolorImage(themeColor:"Accent1",method:"Tinting")] */ background-image:url("/_layouts/images/selectednav.gif"); background-repeat:repeat-x; padding-top:2px; padding-bottom:2px; /* [ReplaceColor(themeColor:"Light1")] */ border-top:solid 1px #ffffff; /* [ReplaceColor(themeColor:"Light1")] */ border-left:solid 1px #ffffff; padding:1px 6px 3px 6px; } .ms-selectednavheader a{ font-weight:bold; /* [ReplaceColor(themeColor:"Dark1")] */ color:#000000; text-decoration:none; } .ms-selectednavheader a:hover{ /* [ReplaceColor(themeColor:"Dark1")] */ color:#000000; text-decoration:underline; } table.ms-navitem td,span.ms-navitem{ background-image:url("/_layouts/images/navBullet.gif"); background-repeat:no-repeat; background-position:left top; padding:3px 6px 4px 16px; font-family:tahoma; } .ms-navsubmenu1{ width:100%; border-collapse:collapse; /* [ReplaceColor(themeColor:"Light1-Lightest")] */ background-color:#f2f8ff; } .ms-navsubmenu2{ width:100%; /* [ReplaceColor(themeColor:"Light1-Lightest")] */ background-color:#f2f8ff; margin-bottom:6px; } table.ms-navselected{ padding:2px; } table.ms-navselected,span.ms-navselected{ /* [RecolorImage(themeColor:"Accent6",method:"Tinting")] */ background-image:url("/_layouts/images/SELECTEDNAV.GIF"); /* [ReplaceColor(themeColor:"Accent6-Lighter")] */ background-color:#ffe6a0; background-repeat:repeat-x; } table.ms-navselected td{ background-image:url("/_layouts/images/navBullet.gif"); background-repeat:no-repeat; background-position:top left; padding:3px 6px 4px 17px; } table.ms-navheader td{ background-image:none; } .ms-navheader a{ font-weight:bold; /* [ReplaceColor(themeColor:"Accent3")] */ color:#003399; text-decoration:none; } .ms-navheader a:hover{ /* [ReplaceColor(themeColor:"Dark1")] */ color:#000000; text-decoration:underline; } .ms-navitem a{ /* [ReplaceColor(themeColor:"Dark2")] */ color:#3b4f65 !important; text-decoration:none; display:inline-block; } .ms-navitem a:hover{ /* [ReplaceColor(themeColor:"Accent1")] */ color:#44aff6 !important; text-decoration:underline !important; } .ms-quicklaunchouter{ border:none; margin-bottom:5px; } .ms-quicklaunchouter{ margin:0px 1px 2px 1px; } .ms-treeviewouter a.ms-navitem{ padding:4px 4px 5px; margin-left:4px; border-color:transparent; border-width:1px; border-style:solid !important; } .ms-tvselected a.ms-navitem{ /* [RecolorImage(themeColor:"Light1")] */ background:url("/_layouts/images/selbg.png") repeat-x left top; /* [ReplaceColor(themeColor:"Accent1",themeTint:"0.15")] */ background-color:#ccebff; /* 
[ReplaceColor(themeColor:"Accent1-Lighter")] */ border-color:#91cdf2; /* [ReplaceColor(themeColor:"Accent1-Lightest")] */ border-top-color:#c6e5f8; border-width:1px; border-style:solid !important; /* [ReplaceColor(themeColor:"Dark2")] */ color:#003759 !important; display:inline-block; } .ms-tvselected a:hover{ /* [ReplaceColor(themeColor:"Dark2")] */ color:#003759 !important; } table.ms-recyclebin td{ /* [ReplaceColor(themeColor:"Light1-Lightest")] */ background-color:#f2f8ff; width:100%; /* [ReplaceColor(themeColor:"Light1")] */ border-top:solid 1px #ffffff; /* [ReplaceColor(themeColor:"Light1")] */ border-left:solid 1px #ffffff; padding:3px 5px 7px 3px; } table.ms-recyclebin td a{ font-weight:bold; /* [ReplaceColor(themeColor:"Accent5-Darker")] */ color:#008800; text-decoration:none; } table.ms-recyclebin td a:hover{ /* [ReplaceColor(themeColor:"Dark1")] */ color:#000000; text-decoration:underline; } .ms-quickLaunch{ padding-top:5px; } .ms-quickLaunch h3{ font-size:1em; font-weight:normal; /* [ReplaceColor(themeColor:"Dark2")] */ color:#929fad; margin:0px 0px 6px 10px; } .ms-quicklaunchheader{ padding:2px 6px 4px 10px; font-weight:bold; /* [ReplaceColor(themeColor:"Light1-Lighter")] */ color:#676767; background-image:url("/_layouts/images/quickLaunchHeader.gif"); background-repeat:repeat-x; /* [ReplaceColor(themeColor:"Accent3-Lightest")] */ background-color:#d6e8ff; /* [ReplaceColor(themeColor:"Light1-Lightest")] */ border-left:solid 1px #f2f8ff; margin-left:-7px; font-size:inherit; } .ms-quicklaunchheader a,.ms-unselectednav a{ /* [ReplaceColor(themeColor:"Dark1-Lighter")] */ color:#676767 !important; text-decoration:none; } .ms-quicklaunchheader a:hover{ /* [ReplaceColor(themeColor:"Dark1")] */ color:#000000 !important; text-decoration:underline; } .ms-navline{ /* [ReplaceColor(themeColor:"Light1-Darker")] */ border-bottom:1px solid #adadad; } .ms-navwatermark{ /* [ReplaceColor(themeColor:"Accent6-Lighter")] */ color:#ffdf88; } .ms-selectednav{ border:1px solid #2353b2; /* [ReplaceColor(themeColor:"Accent6-Lightest")] */ background:#fff699; padding-top:1px; padding-bottom:2px; } .ms-unselectednav{ /* [ReplaceColor(themeColor:"Accent3-Medium")] */ border:1px solid #83b0ec; padding-top:1px; padding-bottom:2px; } .ms-verticaldots{ /* [ReplaceColor(themeColor:"Accent3-Medium")] */ border-right:1px solid #83b0ec; border-left:none; } .ms-nav{ /* [ReplaceColor(themeColor:"Accent3-Medium")] */ background-color:#83b0ec; font-family:tahoma; } .ms-globalTitleArea{ text-align:right; background-image:url("/_layouts/images/siteTitleBKGD.gif"); background-position:right top; background-repeat:repeat-y; padding-left:5px; padding-right:0px; padding-top:1px; } .ms-titlearea{ /* [ReplaceColor(themeColor:"Dark1-Lighter")] */ color:#666666; font-family:tahoma; font-size:8pt; letter-spacing:.1em; } .ms-titlearea a { /* [ReplaceColor(themeColor:"Accent3-Darker")] */ color:#3966bf; text-decoration:none; } .ms-titlearea a:hover { /* [ReplaceColor(themeColor:"Dark1")] */ color:#000000; text-decoration:underline; } .ms-titlearealeft { /* [ReplaceColor(themeColor:"Accent3-Lightest")] */ background-color:#d6e8ff; } TD.ms-titleareaframe,Div.ms-titleareaframe,.ms-pagetitleareaframe{ background:url("/_layouts/images/bgximg.png") repeat-x -0px -461px; /* [ReplaceColor(themeColor:"Accent3-Lightest")] */ background-color:#d6e8ff; text-align:left; } div.ms-titleareaframe{ height:100%; } .ms-pagetitleareaframe table{ background-image:url("/_layouts/images/topshape.jpg"); background-repeat:no-repeat; 
background-position:332px 4px; height:54px; } .ms-titlearealine{ /* [ReplaceColor(themeColor:"Accent3-Medium")] */ background-color:#83b0ec; } .ms-titleareaframe table td.ms-titlearea,.ms-areaseparator table td.ms-titlearea,.ms-pagetitleareaframe table td.ms-titlearea{ padding:7px 0px 1px 0px; } .ms-sitemapdirectional,.ms-sitemapdirectional a{ unicode-bidi:embed; } .ms-areaseparatorcorner{ background-image:url("/_layouts/images/framecornergrad.gif"); background-position:left top; background-repeat:repeat-y; height:8px; /* [ReplaceColor(themeColor:"Accent5-Medium")] */ border-right:1px solid #6f9dd9; } td.ms-areaseparatorleft{ background:#d6e8ff url("/_layouts/images/bgximg.png") repeat-x -0px -461px; /* [ReplaceColor(themeColor:"Accent5-Medium")] */ border-right:1px solid #6f9dd9; height:100%; } div.ms-areaseparatorleft{ background-repeat:no-repeat; background-position:-143px 0px; /* [ReplaceColor(themeColor:"Accent5-Medium")] */ border-right:1px solid #6f9dd9; height:100%; } div.ms-areaseparatorright{ /* [ReplaceColor(themeColor:"Accent5-Medium")] */ border-left:1px solid #6f9dd9; padding-right:2px; height:100%; } .ms-titlearearight .ms-areaseparatorright{ background:#d6e8ff url("/_layouts/images/bgximg.png") repeat-x -0px -461px; /* [ReplaceColor(themeColor:"Accent5-Medium")] */ border-left:1px solid #6f9dd9; padding-right:2px; height:100%; } .ms-areaseparator{ /* [ReplaceColor(themeColor:"Accent4-Lightest")] */ background-color:#ffeaad; border-right:none; border-left:none; padding-left:5px; height:61px; } .ms-pagemargin{ background-color:#83b0ec; height:100%; } td.ms-rightareacell div.ms-pagemargin{ /* [ReplaceColor(themeColor:"Accent3-Medium")] */ background-color:#83b0ec; height:100%; /* [ReplaceColor(themeColor:"Accent3-Medium")] */ border-left:solid 1px #83b0ec; } .ms-bodyareacell{ vertical-align:top; } .ms-pagebottommargin,.ms-pagebottommarginleft,.ms-pagebottommarginright{ /* [ReplaceColor(themeColor:"Accent3-Medium")] */ background:#83b0ec; } .ms-bodyareapagemargin{ /* [ReplaceColor(themeColor:"Accent3-Medium")] */ background:#83b0ec; /* [ReplaceColor(themeColor:"Accent3-Lighter")] */ border-top:1px solid #6f9dd9; } .ms-bodyareaframe{ vertical-align:top; height:100%; /* [ReplaceColor(themeColor:"Light1")] */ background-color:#ffffff; /* [ReplaceColor(themeColor:"Accent3-Lighter")] */ border:1px solid #6f9dd9; } .ms-bodyareaframe{ padding:10px; } .ms-pagetitle{ /* [ReplaceColor(themeColor:"Dark1")] */ color:#000000; font-family:verdana; font-size:16pt; margin:0px 0px 4px 0px; font-weight:normal; } .ms-pagetitle a{ text-decoration:none; /* [ReplaceColor(themeColor:"Dark1")] */ color:#000000; margin:0; font-weight:normal; } .ms-pagetitle a:hover{ } .ms-vh table.ms-selectedtitle,.ms-vh2 table.ms-selectedtitle,.ms-vh-icon table.ms-selectedtitle,.ms-vh table.ms-unselectedtitle,.ms-vh2 table.ms-unselectedtitle,.ms-vh-icon table.ms-unselectedtitle{ height:21px; } .ms-vh table.ms-selectedtitle,.ms-vh2 table.ms-selectedtitle,.ms-vh-icon table.ms-selectedtitle{ /* [ReplaceColor(themeColor:"Light1-Lighter")] */ background-color:#dde1e5; border:none; } .ms-vh2 .ms-selectedtitle .ms-vb,.ms-vh2 .ms-unselectedtitle .ms-vb{ padding-left:5px; padding-right:5px; padding-top:1px; } .ms-vh-icon .ms-selectedtitle .ms-vb,.ms-vh-icon .ms-unselectedtitle .ms-vb{ padding-left:0px; vertical-align:middle; } .ms-propertysheet th.ms-vh2,.ms-propertysheet th.ms-vh2-nofilter{ font-family:tahoma; } .ms-listviewtable .ms-vh2,.ms-summarystandardbody .ms-vh2{ padding:1px 1px 0px 1px; } .ms-listviewtable 
.ms-vb2,.ms-summarystandardbody .ms-vb2{ padding-left:2px; padding-right:7px; } .ms-selectedtitle{ /* [ReplaceColor(themeColor:"Light1")] */ background-color:#ffffff; /* [ReplaceColor(themeColor:"Accent4-Darker")] */ border:1px solid #b09460; margin:0px; padding:0px; cursor:pointer; } .ms-selectedtitlealternative { /* [ReplaceColor(themeColor:"Light1")] */ background-color:#ffffff; /* [ReplaceColor(themeColor:"Accent4-Darker")] */ border:1px solid #b09460; margin:0px; padding:0px; cursor:pointer; } .ms-unselectedtitle{ background-color:transparent; margin:0px; padding:0px; } .ms-newgif{ display:inline-block; margin-left:5px; } .ms-menuimagecell{ /* [RecolorImage(themeColor:"Accent1",method:"Tinting")] */ background:url("/_layouts/images/selectednav.gif") repeat-x; /* [ReplaceColor(themeColor:"Accent6-Lighter")] */ background-color:#ffe6a0; cursor:pointer; /* [ReplaceColor(themeColor:"Light1")] */ border:solid 1px #ffffff; padding:0px; height:18px; } .ms-vh .ms-menuimagecell,.ms-vh2 .ms-menuimagecell,.ms-vh-icon .ms-menuimagecell{ height:20px; } .ms-vh .ms-menuimagecell img,.ms-vh2 .ms-menuimagecell img,.ms-vh-icon .ms-menuimagecell img{ margin-top:2px; margin-bottom:2px; } .ms-descriptiontext{ /* [ReplaceColor(themeColor:"Dark1-Medium")] */ color:#4c4c4c; font-family:tahoma; font-size:8pt; text-align:left; } .ms-statusdescriptiontext { color:#4c4c4c; background-color:#FFFF00; font-family:tahoma; font-size:8pt; text-align:left; } .ms-webpartpagedescription{ font-family:verdana; font-size:8pt; /* [ReplaceColor(themeColor:"Dark1-Lighter")] */ color:#5a5a5a; padding:8px 12px 0px 12px; } .ms-separator { /* [ReplaceColor(themeColor:"Light2",themeShade:"0.02")] */ color:#f1f1f2; background-repeat:repeat-x; border:none; padding-left:4px; font-size:10pt; } .ms-rtetoolbarmenu .ms-separator{ padding-left:0px !important; /* [ReplaceColor(themeColor:"Accent3-Medium")] */ color:#83b0ec; } .ms-separator img { height:12px; width:1px; margin:0px 1px 0px 1px; /* [ReplaceColor(themeColor:"Light2",themeShade:"0.02")] */ background:#f1f1f2; } .ms-propertysheet th.ms-authoringcontrols { /* [ReplaceColor(themeColor:"Accent3-Lightest")] */ background-color:#f1f1f2; text-align:left; } table.ms-authoringcontrols > tbody > tr > td{ vertical-align:middle; } td.ms-authoringcontrols > label,td.ms-authoringcontrols > span > label,td.ms-authoringcontrols > table > tbody > tr > td > label{ vertical-align:middle; } .ms-propertysheet th.ms-linksectionheader { /* [ReplaceColor(themeColor:"Dark1")] */ color:#000000; font-family:tahoma; font-size:8pt; font-weight:bold; text-align:left; } .ms-linksectionitemdescription{ padding-left:3px; padding-top:7px; } .ms-propertysheet .ms-sectionheader a,.ms-propertysheet .ms-sectionheader a:hover { /* [ReplaceColor(themeColor:"Dark1-Lighter")] */ color:#525252; text-decoration:none; } .ms-partline { height:3px; /* [ReplaceColor(themeColor:"Dark2",themeTint:"0.17")] */ border-bottom:1px solid #EBEBEB; } .ms-propertysheet{ font-family:verdana; font-size:1em; text-align:left; /* [ReplaceColor(themeColor:"Dark1-Medium")] */ color:#4c4c4c; } .ms-propertysheet th{ font-family:verdana; font-size:8pt; /* [ReplaceColor(themeColor:"Dark1-Medium")] */ color:#4c4c4c; font-weight:normal; } .ms-propertysheet a{ text-decoration:none; /* [ReplaceColor(themeColor:"Accent3-Darker")] */ color:#3966bf; } .ms-propertysheet a:hover{ text-decoration:underline; /* [ReplaceColor(themeColor:"Dark1")] */ color:#000000; } 
.ms-vh,.ms-vh2,.ms-vh-icon-empty,.ms-vhImage,.ms-vh2-nograd,.ms-vh3-nograd,.ms-vh2-nograd-icon,.ms-vh2-nofilter-icon,.ms-ph{ font-weight:normal; /* [ReplaceColor(themeColor:"Light1-Medium")] */ color:#b2b2b2; text-align:left; text-decoration:none; vertical-align:top; } .ms-vh-icon{ vertical-align:middle; } .ms-gb,.ms-gb2,.ms-gbload,.ms-vb-tall,.ms-vb-user,.ms-pb,.ms-pb-selected td{ /* [ReplaceColor(themeColor:"Dark1")] */ color:#000000; } .ms-gb a,.ms-gb2 a{ /* [ReplaceColor(themeColor:"Accent3")] */ color:#003399; } .ms-vh,.ms-vh2,.ms-vh-icon,.ms-vh-icon-empty,.ms-vhImage,.ms-gb,.ms-gb2,.ms-gbload,.ms-vb,.ms-vb2,.ms-vb-tall,.ms-vb-user,.ms-vh2-nograd,.ms-vh3-nograd,.ms-vh2-nograd-icon,.ms-vh2-nofilter-icon,.ms-pb,.ms-pb-selected,.ms-ph{ font-size:8pt; line-height:1.2; font-family:Verdana,Helvetica,sans-serif; } .ms-vh,.ms-vh2,.ms-vh2-nograd,.ms-vh3-nograd,.ms-vh2-nograd-icon,.ms-vh2-nofilter-icon,.ms-ph{ white-space:nowrap; } .ms-vh,.ms-vh2,.ms-vh-icon,.ms-vh2-nofilter-icon,.ms-viewheadertr .ms-vh-group,.ms-vh2-nograd,.ms-vh3-nograd,.ms-vh2-nograd-icon,.ms-ph,.ms-pickerresultheadertr{ background-repeat:repeat-x; padding-top:1px; padding-bottom:0px; } .ms-viewheadertr th{ padding-top:5px !important; } .ms-disc .ms-viewheadertr th.ms-vh2{ padding:1px 5px 0px 4px; } .ms-disc .ms-vh2 .ms-selectedtitle .ms-vb,.ms-disc .ms-vh2 .ms-unselectedtitle .ms-vb{ padding-left:4px; } th.ms-vh3-nograd{ width:12px; /* [ReplaceColor(themeColor:"Light1-Darker")] */ color:#949494; font-size:8pt; font-family:tahoma,sans-serif; } .ms-vh .ms-vh{ background-image:none; border-left:none; padding-left:1px; background-color:transparent; } .ms-vh2,.ms-ph{ padding:3px 8px 1px; } .ms-vh-div{ padding-top:5px; } .ms-vh-icon,.ms-vh2-nograd-icon,.ms-vh2-nofilter-icon{ width:12px; } .ms-vh-icon{ padding-left:6px; padding-right:4px; padding-bottom:3px; } .ms-vh-icon-empty{ width:0px; } .ms-vh a,.ms-vh a:visited,.ms-vh2 a{ /* [ReplaceColor(themeColor:"Dark1-Lightest")] */ color:#7f7f7f; text-decoration:none; } .ms-vh a:hover,.ms-vh2 a:hover{ text-decoration:underline; } .ms-imnImgTD { padding-right:2px; padding-bottom:5px; } .ms-vhltr .ms-imnImgTD { padding-right:2px; } .ms-vhrtl .ms-imnImgTD { padding-left:2px; } .ms-imnTxtTD { padding-top:0px; } .ms-vhImage{ width:18pt } .ms-standardheader{ font-size:1em; margin:0em; text-align:left; /* [ReplaceColor(themeColor:"Dark1")] */ color:#525252; } .ms-formlabel h3.ms-standardheader{ font-weight:normal; color:auto; } .ms-linksectionheader .ms-standardheader{ /* [ReplaceColor(themeColor:"Dark1")] */ color:#000000; } .ms-gb{ height:22px; /* [ReplaceColor(themeColor:"Light1")] */ background-color:#ffffff; font-weight:bold; /* [ReplaceColor(themeColor:"Accent3-Lighter")] */ border-bottom:1px solid #8ebbf5; /* [ReplaceColor(themeColor:"Light1-Lightest")] */ border-top:1px solid #f9f9f9; padding-bottom:3px; } .ms-gb .ms-vb2{ font-weight:normal; } .ms-listviewtable .ms-gb,.ms-listviewtable .ms-gb2{ padding-top:14px; } .ms-gb2{ height:22px; /* [ReplaceColor(themeColor:"Dark1-Medium")] */ color:#4c4c4c; padding-bottom:3px; /* [ReplaceColor(themeColor:"Accent3-Lightest")] */ border-bottom:1px solid #e3efff; /* [ReplaceColor(themeColor:"Light1-Lightest")] */ border-top:1px solid #f9f9f9; } .ms-gbload{ height:22px; /* [ReplaceColor(themeColor:"Dark1-Medium")] */ color:#4c4c4c; /* [ReplaceColor(themeColor:"Light1")] */ background-color:#ffffff; padding-bottom:3px; } .ms-vb,.ms-vb2,.ms-vb-user,.ms-vb-tall,.ms-pb,.ms-pb-selected { /* [ReplaceColor(themeColor:"Dark1")] */ color:#6d6f72; 
vertical-align:top; } .ms-vb a:link,.ms-vb2 a:link,.ms-vb-user a:link{ /* [ReplaceColor(themeColor:"Hyperlink")] */ color:#0072BC; text-decoration:none; } .ms-vb a:hover,.ms-vb2 a:hover,.ms-vb-user a:hover{ text-decoration:underline; } .ms-vb a:visited,.ms-vb2 a:visited,.ms-vb-user a:visited{ /* [ReplaceColor(themeColor:"Hyperlink")] */ color:#0072BC; text-decoration:none; } .ms-vb a:visited:hover,.ms-vb2 a:visited:hover,.ms-vb-user a:visited:hover{ /* [ReplaceColor(themeColor:"Hyperlink")] */ color:#0072BC; text-decoration:underline; } .ms-alternatingstrong .ms-vb a:link,.ms-alternatingstrong .ms-vb2 a:link,.ms-alternatingstrong .ms-vb-user a:link,.ms-alternatingstrong .ms-vb a:visited,.ms-alternatingstrong .ms-vb2 a:visited,.ms-alternatingstrong .ms-vb-user a:visited,.ms-alternatingstrong .ms-vb a:visited:hover,.ms-alternatingstrong .ms-vb2 a:visited:hover,.ms-alternatingstrong .ms-vb-user a:visited:hover{ /* [ReplaceColor(themeColor

    Read the article

  • Implementing Release Notes in TFS Team Build 2010

    - by Jakob Ehn
    In TFS Team Build (all versions), each build is associated with changesets and work items. To determine which changesets should be associated with the current build, Team Build finds the label of the "Last Good Build" and then aggregates all changesets up until the label for the current build. Basically this means that if your build is failing, every changeset that is checked in will be accumulated in this list until the build is successful. All well, but there is a dimension missing here regarding releases. Often you run several release builds before you actually deploy the result of a build to a test or production system. When you do this, wouldn't it be nice to be able to send the customer a nice release note that contains all work items and changesets since the previously deployed version? At our company, we have developed a Release Repository, which basically is a simple web site with a SQL database as storage. Every time we run a Release Build, the resulting installers, zip files, SQL scripts etc. get pushed into the release repository together with the relevant build information. This information contains things such as start time, who triggered the build, etc. It also contains the associated changesets and work items. When deploying the MSIs for a new version, we mark the build as Deployed in the release repository. The deployed status is stored in the release repository database, but it could also have been implemented by setting the Build Quality for that build to Deployed. When generating the release notes, the web site simply runs through each release build back to the previous build that was marked as Deployed, and aggregates the work items and changesets: Here is a sample screenshot of how this looks for a sample build/application The web site is available both to us and to the customers and testers, which means that they can easily get the latest version of a particular application and at the same time see what changes are included in this version. There is a lot going on in the Release Build Process that drives this in our TFS 2010 server, but in this post I will show how you can access and read the changeset and work item information in a custom activity. Since Team Build associates changesets and work items with each build, this information is (partially) available inside the build process template. The Associate Changesets and Work Items for non-Shelveset Builds activity (located inside the Try Compile, Test, and Associate Changesets and Work Items activity) defines and populates a variable called associatedWorkItems   You can see that this variable is an IList containing instances of the Changeset class (from the Microsoft.TeamFoundation.VersionControl.Client namespace). Now, if you want to access this variable later on in the build process template, you need to declare a new variable in the corresponding scope and then assign the value to this variable. In this sample, I declared a variable called assocChangesets in the RunAgent sequence, which basically covers the whole compile, test and drop part of the build process:   Now, you need to assign the value from AssociatedChangesets to this variable. This is done using the Assign workflow activity:   Now you can add a custom activity anywhere inside the RunAgent sequence and use this variable. NB: Of course your activity must be placed somewhere after the variable has been populated. 
To finish off, here is a code snippet that shows how you can read the changeset and work item information from the variable.   First you add an InArgument to your activity where you can pass in the variable that we defined. [RequiredArgument] public InArgument<IList<Changeset>> AssociatedChangesets { get; set; } Then you can traverse all the changesets in the list, and for each changeset use the WorkItems property to get the work items that were associated in that changeset: foreach (Changeset ch in associatedChangesets) { // Add change theChangesets.Add( new AssociatedChangeset(ch.ChangesetId, ch.ArtifactUri, ch.Committer, ch.Comment, ch.ChangesetId)); foreach (var wi in ch.WorkItems) { theWorkItems.Add( new AssociatedWorkItem(wi["System.AssignedTo"].ToString(), wi.Id, wi["System.State"].ToString(), wi.Title, wi.Type.Name, wi.Id, wi.Uri)); } } NB: AssociatedChangeset and AssociatedWorkItem are custom classes that we use internally for storing this information that is eventually pushed to the release repository.
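
    For context, here is a rough sketch of what the storage side of such a release repository could look like; all table and column names below are hypothetical and not the schema used internally:

      -- One row per release build, flagged when its output is deployed
      CREATE TABLE ReleaseBuild (
          BuildId     INT IDENTITY PRIMARY KEY,
          BuildNumber NVARCHAR(100) NOT NULL,
          Application NVARCHAR(100) NOT NULL,
          StartTime   DATETIME      NOT NULL,
          TriggeredBy NVARCHAR(100) NOT NULL,
          Deployed    BIT           NOT NULL DEFAULT 0
      );

      -- Changesets and work items associated with each build
      CREATE TABLE ReleaseChangeset (
          BuildId     INT NOT NULL REFERENCES ReleaseBuild(BuildId),
          ChangesetId INT NOT NULL,
          Committer   NVARCHAR(100) NOT NULL,
          Comment     NVARCHAR(MAX) NULL
      );

      CREATE TABLE ReleaseWorkItem (
          BuildId    INT NOT NULL REFERENCES ReleaseBuild(BuildId),
          WorkItemId INT NOT NULL,
          Title      NVARCHAR(255) NOT NULL,
          State      NVARCHAR(50)  NOT NULL,
          AssignedTo NVARCHAR(100) NULL
      );

      -- Release notes: work items from all builds after the most recent deployed one
      SELECT wi.WorkItemId, wi.Title, wi.State
      FROM   ReleaseWorkItem wi
      JOIN   ReleaseBuild b ON b.BuildId = wi.BuildId
      WHERE  b.BuildId > (SELECT ISNULL(MAX(BuildId), 0) FROM ReleaseBuild WHERE Deployed = 1);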

    Read the article

  • SQL SERVER – Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2

    - by Pinal Dave
    This is the second part of the series Incremental Statistics. Here is the index of the complete series. What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1 Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2 DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3 In part 1 we have understood what is incremental statistics and now in this second part we will see a simple example of incremental statistics. This blog post is heavily inspired from my friend Balmukund’s must read blog post. If you have partitioned table and lots of data, this feature can be specifically very useful. Prerequisite Here are two things you must know before you start with the demonstrations. AdventureWorks – For the demonstration purpose I have installed AdventureWorks 2012 as an AdventureWorks 2014 in this demonstration. Partitions – You should know how partition works with databases. Setup Script Here is the setup script for creating Partition Function, Scheme, and the Table. We will populate the table based on the SalesOrderDetails table from AdventureWorks. -- Use Database USE AdventureWorks2014 GO -- Create Partition Function CREATE PARTITION FUNCTION IncrStatFn (INT) AS RANGE LEFT FOR VALUES (44000, 54000, 64000, 74000) GO -- Create Partition Scheme CREATE PARTITION SCHEME IncrStatSch AS PARTITION [IncrStatFn] TO ([PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY]) GO -- Create Table Incremental_Statistics CREATE TABLE [IncrStatTab]( [SalesOrderID] [int] NOT NULL, [SalesOrderDetailID] [int] NOT NULL, [CarrierTrackingNumber] [nvarchar](25) NULL, [OrderQty] [smallint] NOT NULL, [ProductID] [int] NOT NULL, [SpecialOfferID] [int] NOT NULL, [UnitPrice] [money] NOT NULL, [UnitPriceDiscount] [money] NOT NULL, [ModifiedDate] [datetime] NOT NULL) ON IncrStatSch(SalesOrderID) GO -- Populate Table INSERT INTO [IncrStatTab]([SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice],   [UnitPriceDiscount], [ModifiedDate]) SELECT     [SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice],   [UnitPriceDiscount], [ModifiedDate] FROM       [Sales].[SalesOrderDetail] WHERE      SalesOrderID < 54000 GO Check Details Now we will check details in the partition table IncrStatSch. -- Check the partition SELECT * FROM sys.partitions WHERE OBJECT_ID = OBJECT_ID('IncrStatTab') GO You will notice that only a few of the partition are filled up with data and remaining all the partitions are empty. Now we will create statistics on the Table on the column SalesOrderID. However, here we will keep adding one more keyword which is INCREMENTAL = ON. Please note this is the new keyword and feature added in SQL Server 2014. It did not exist in earlier versions. -- Create Statistics CREATE STATISTICS IncrStat ON [IncrStatTab] (SalesOrderID) WITH FULLSCAN, INCREMENTAL = ON GO Now we have successfully created statistics let us check the statistical histogram of the table. Now let us once again populate the table with more data. This time the data are entered into a different partition than earlier populated partition. 
    -- Populate Table INSERT INTO [IncrStatTab]([SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice], [UnitPriceDiscount], [ModifiedDate]) SELECT [SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice], [UnitPriceDiscount], [ModifiedDate] FROM [Sales].[SalesOrderDetail] WHERE SalesOrderID > 54000 GO Let us check the status of the partitions once again with the following script. -- Check the partition SELECT * FROM sys.partitions WHERE OBJECT_ID = OBJECT_ID('IncrStatTab') GO Statistics Update Now here is where the new feature comes into action. Previously, if we had to update the statistics, we would have to FULLSCAN the entire table irrespective of which partition got the data. However, in SQL Server 2014 we can just specify which partitions we want to update the statistics for. Here is the script for the same. -- Update Statistics Manually UPDATE STATISTICS IncrStatTab (IncrStat) WITH RESAMPLE ON PARTITIONS(3, 4) GO Now let us check the statistics once again. -- Show Statistics DBCC SHOW_STATISTICS('IncrStatTab', IncrStat) WITH HISTOGRAM GO Upon examining the statistics histogram, you will notice that the distribution has changed and there are far more rows in the histogram. Summary The new feature of Incremental Statistics is indeed a boon for scenarios where there are partitions and statistics need to be updated frequently on those partitions. In earlier versions, updating statistics meant a FULLSCAN on the entire table, which wasted resources. With the new feature in SQL Server 2014, only those partitions which have significantly changed need to be specified in the script to update statistics. Cleanup You can clean up the database by executing the following script. -- Clean up DROP TABLE [IncrStatTab] DROP PARTITION SCHEME [IncrStatSch] DROP PARTITION FUNCTION [IncrStatFn] GO Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Statistics, Statistics
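    A quick way to confirm that a statistics object really was created as incremental (a minimal sketch, assuming SQL Server 2014 or later and the object names used in the demo above) is to check the is_incremental flag in sys.stats:

    -- Check whether the demo statistics object is incremental
    SELECT s.name AS StatisticsName, s.is_incremental
    FROM sys.stats AS s
    WHERE s.object_id = OBJECT_ID('IncrStatTab') AND s.name = 'IncrStat'
    GO

    If is_incremental returns 1, partition-level maintenance such as the UPDATE STATISTICS ... WITH RESAMPLE ON PARTITIONS statement shown above is available for that object.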

    Read the article

  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them. As for specs: Any record generated must be able to be connected to any other record in any other user table (excluding itself...the record, not the table). These "connections" are directional, and the list of connections a record has is user ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others. The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended. A record's field can also include aggregate information from its connections (like average, sum, etc.) that must be updated on change from another record it's connected to. To conserve memory, only relevant information must be loaded at any one time (can't load the entire database in memory at load and go from there). I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote db. None of the user tables, connections or records are known at design time, as they are user generated. I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt, I had one object managing all a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became onerous (i.e. a huge spaghettified mess). Tracing dependencies using this method became almost impossible. Instead, I've settled on a distributed graph model where each record and connection is 'aware' of what's around it by managing its own data and connections to other records. Doing this increases my memory footprint but also lets me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: trace dependencies, eliminate cycling recursive updates, etc. My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas, so I wanted to ask and see if anybody else has ideas of how this should be structured. Connections are fairly simple. They contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far: Store all the connections in one big table. If I do this, either I load all connections at once (one big db call) or make a call every time a user table is loaded. The big issue here: the size of the connections table has the potential to be huge, and I'm afraid it would slow things down. Store in separate tables all the outgoing connections for each user table. This is probably the worst idea I've had. Now my connections are 'spread out' over multiple tables (one for each user table), which means I have to make a separate DB call to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it. Store in separate tables all outgoing AND incoming connections for each user table (using a flag to distinguish between incoming vs outgoing). 
    This is the idea I'm leaning towards, but it will essentially double the total DB storage for all the connections (as each connection will be stored in two tables). It also means I have to make sure connection information is kept in sync in both places. This is obviously not ideal, but it does mean that when I load a user table, I only need to load one 'connection' table and have all the information I need. This also presents a separate problem, that of connection object creation. Since each user table has a list of all connections, there are two opportunities for a connection object to be made. However, connection objects (designed to facilitate communication between records) should only be created once. This means I'll have to devise a common caching/factory object to make sure only one connection object is made per connection. Does anybody have any ideas for a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
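    For what it's worth, a rough T-SQL-flavored sketch of the first option (one shared connections table) is shown below; every name and type here is an assumption for illustration, not part of the poster's schema. Indexing the table in both directions keeps incoming and outgoing lookups cheap without storing each connection twice:

    -- Hypothetical shared connections table; column names follow the fields listed above
    CREATE TABLE Connections (
        ConnectionID    INT IDENTITY(1,1) PRIMARY KEY,
        FromTableID     INT NOT NULL,
        FromRecordID    INT NOT NULL,
        FromRecordOrder INT NOT NULL,
        ToTableID       INT NOT NULL,
        ToRecordID      INT NOT NULL,
        ToRecordOrder   INT NOT NULL
    )

    -- One index per direction, so loading a user table touches only its own rows
    CREATE INDEX IX_Connections_From ON Connections (FromTableID, FromRecordID, FromRecordOrder)
    CREATE INDEX IX_Connections_To   ON Connections (ToTableID, ToRecordID, ToRecordOrder)

    With both indexes in place, the "one big table" concern may matter less than it first appears: each load is an index seek on the relevant table ID rather than a scan, and a single table also gives a natural home for the caching/factory object that guarantees one connection object per row.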

    Read the article

  • Oracle data warehouse design - fact table acting as a dimension?

    - by Elizabeth
    THANKS: Both answers here are very helpful, but I could only pick one. I really appreciate the advice! Our data warehouse will be used more for workflow reports than traditional analytical reports. Our users care about the "current picture" far more than history (though history matters, too). We are a government entity that does not have costs or related calculations. Mostly just counts of people within given locations and with related history. We are using Oracle, and I have found a distinct advantage in using the star join whenever possible and would like to rearchitect everything to resemble the star schema as closely as is reasonable for our business uses. Speed in this DW is vital, and a number of tests have already proven the star schema approach to me. Our "person" table is key - it contains over 4 million records and will be the most frequently used source in queries. It can be seen at the center of a star with multiple dimensions (like age, gender, affiliation, location, etc.). It is a very LONG table, particularly when I join it to the address and contact information. However, it is more like a dimension table when we start looking at history. For example, there are two different history tables that have a person key pointing to the person table. One has over 20 million records and the other has almost 50 million and grows daily. Is this table a fact table or a dimension table? Can one work as both? If so, is that going to be a big performance problem? Is it common to query more off of a dimension than a fact? What happens if a DIFFERENT fact table that uses the person table as a dimension has only 60,000 records (much smaller)? I think my problem is that our data and use of it does not fit with the commonly used examples of star schemas. CLARIFICATION: Some good thoughts have been added below, but perhaps I left too much out to really explain well. Here's some more info: We handle a voter database. We don't have any measures except voter counts by various groups: voter counts by party, by age, by location; voter counts by ballot type and election, by ballot status and election, etc. We do have a "voting history" log as well as an activity audit log (change of address, party, etc.). We have information on which voters are election workers and all that related information. I figure I'll get to the peripheral stuff later. For now I'm focusing on our two major "business processes": voter registration (which IS a voter) and election turnout. In the first, voter is a fact. In the second, voter is a dimension, along with party, election, and type of ballot. (And in case anyone is worried - no, we don't know HOW people vote, just that they do. LOL) I hope that clarifies things a bit.
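    To make the "fact acting as a dimension" idea concrete, here is a hedged Oracle-style sketch; the table and column names are assumptions for illustration, not the actual schema. The voting-history fact simply joins to person the way it would to any other dimension:

    -- Illustrative only: person used as a dimension of a hypothetical voting_history fact
    SELECT p.party,
           e.election_date,
           COUNT(*) AS voters_turned_out
    FROM   voting_history vh
           JOIN person   p ON p.person_key   = vh.person_key
           JOIN election e ON e.election_key = vh.election_key
    GROUP BY p.party, e.election_date

    Nothing stops the same physical person table from also sitting at the center of its own star for the registration counts; whether that hurts performance depends mostly on how selective the joins into the 20-50 million row history tables are.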

    Read the article

  • Cannot create a row of size 8074 which is greater than the allowable maximum row size of 8060.

    - by Lieven Cardoen
    I have already asked a question about this, but the problem keeps on hitting me ;-) I have two tables that are identical. I want to add an xml column. For the first table this is no problem, but for the second table I get the SqlException (title). However, apart from the data in them, they are the same. So, can I get the SqlException because of data in the table? I have also tried to store the field off page with EXEC sp_tableoption 'dbo.PackageSessionNodesFinished', 'large value types out of row', 1 but without any success. The same SqlException keeps coming. First table: PackageSessionNodes CREATE TABLE [dbo].[PackageSessionNodes]( [PackageSessionNodeId] [int] IDENTITY(1,1) NOT NULL, [PackageSessionId] [int] NOT NULL, [TreeNodeId] [int] NOT NULL, [Duration] [int] NULL, [Score] [float] NOT NULL, [ScoreMax] [float] NOT NULL, [Interactions] [xml] NOT NULL, [BrainTeaser] [bit] NULL, [DateCreated] [datetime] NULL, [CompletionStatus] [int] NOT NULL, [ReducedScore] [float] NOT NULL, [ReducedScoreMax] [float] NOT NULL, [ContentInteractions] [xml] NOT NULL, CONSTRAINT [PK_PackageSessionNodes] PRIMARY KEY CLUSTERED ( [PackageSessionNodeId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] Second table: PackageSessionNodesFinished CREATE TABLE [dbo].[PackageSessionNodesFinished]( [PackageSessionNodeFinishedId] [int] IDENTITY(1,1) NOT NULL, [PackageSessionId] [int] NOT NULL, [TreeNodeId] [int] NOT NULL, [Duration] [int] NULL, [Score] [float] NOT NULL, [ScoreMax] [float] NOT NULL, [Interactions] [xml] NOT NULL, [BrainTeaser] [bit] NULL, [DateCreated] [datetime] NULL, [CompletionStatus] [int] NOT NULL, [ReducedScore] [float] NOT NULL, [ReducedScoreMax] [float] NOT NULL, [ContentInteractions] [xml] NULL, CONSTRAINT [PK_PackageSessionNodesFinished] PRIMARY KEY CLUSTERED ( [PackageSessionNodeFinishedId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] First script I tried to run (the first two ALTER TABLE statements work fine, the third crashes with the SqlException): ALTER TABLE dbo.PackageSessionNodes ADD ContentInteractions xml NOT NULL CONSTRAINT DF_PackageSessionNodes_ContentInteractions DEFAULT (('<contentinteractions/>')); ALTER TABLE dbo.PackageSessionNodes DROP CONSTRAINT DF_PackageSessionNodes_ContentInteractions ALTER TABLE dbo.PackageSessionNodesFinished ADD ContentInteractions xml NOT NULL CONSTRAINT DF_PackageSessionNodesFinished_ContentInteractions DEFAULT (('<contentinteractions/>')); ALTER TABLE dbo.PackageSessionNodesFinished DROP CONSTRAINT DF_PackageSessionNodesFinished_ContentInteractions Second script I tried to run, with the same result as the previous script: EXEC sp_tableoption 'dbo.PackageSessionNodes', 'large value types out of row', 1 ALTER TABLE dbo.PackageSessionNodes ADD ContentInteractions xml NOT NULL CONSTRAINT DF_PackageSessionNodes_ContentInteractions DEFAULT (('<contentinteractions/>')); ALTER TABLE dbo.PackageSessionNodes DROP CONSTRAINT DF_PackageSessionNodes_ContentInteractions EXEC sp_tableoption 'dbo.PackageSessionNodesFinished', 'large value types out of row', 1 ALTER TABLE dbo.PackageSessionNodesFinished ADD ContentInteractions xml NOT NULL CONSTRAINT DF_PackageSessionNodesFinished_ContentInteractions DEFAULT (('<contentinteractions/>')); ALTER TABLE dbo.PackageSessionNodesFinished DROP CONSTRAINT DF_PackageSessionNodesFinished_ContentInteractions Now, in PackageSessionNodes there are 
234 records, in PackageSessionNodesFinished there are 4256946 records. Really would appreciate some help here as I'm stuck.
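    As a diagnostic sketch only (not a fix, and assuming the table definitions above), measuring how much space the existing xml values occupy can show whether the data itself is what pushes rows past the 8060-byte limit:

    -- Find the widest existing rows in the failing table
    SELECT TOP 10
           PackageSessionNodeFinishedId,
           DATALENGTH(Interactions) AS InteractionsBytes
    FROM   dbo.PackageSessionNodesFinished
    ORDER  BY DATALENGTH(Interactions) DESC

    One caveat worth knowing: 'large value types out of row' only applies to values written after the option is set, so rows whose xml is already stored in-row may need to be rewritten (for example, UPDATE dbo.PackageSessionNodesFinished SET Interactions = Interactions) before the ALTER TABLE can succeed.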

    Read the article

  • Regular expression does not find the first occurrence

    - by scharan
    I have the following input to a perl script and I wish to get the first occurrence of NAME="..." strings in each of the ... structures. The entire file is read into a single string and the reg exp acts on that input. However, the regex always returns the LAST occurrence of NAME="..." strings. Can anyone explain what is going on and how this can be fixed? Input file: ADSDF <TABLE> NAME="ORDERSAA" line1 line2 NAME="ORDERSA" line3 NAME="ORDERSAB" </TABLE> <TABLE> line1 line2 NAME="ORDERSB" line3 </TABLE> <TABLE> line1 line2 NAME="ORDERSC" line3 </TABLE> <TABLE> line1 line2 NAME="ORDERSD" line3 line3 line3 </TABLE> <TABLE> line1 line2 NAME="QUOTES2" line3 NAME="QUOTES3" NAME="QUOTES4" line3 NAME="QUOTES5" line3 </TABLE> <TABLE> line1 line2 NAME="QUOTES6" NAME="QUOTES7" NAME="QUOTES8" NAME="QUOTES9" line3 line3 </TABLE> <TABLE> NAME="MyName IsKhan" </TABLE> Perl Code starts here: use warnings; use strict; my $nameRegExp = '(<table>((NAME="(.+)")|(.*|\n))*</table>)'; sub extractNames($$){ my ($ifh, $ofh) = @_; my $fullFile; read ($ifh, $fullFile, 1024);#Hardcoded to read just 1024 bytes. while( $fullFile =~ m#$nameRegExp#gi){ print "found: ".$4."\n"; } } sub main(){ if( ($#ARGV + 1 )!= 1){ die("Usage: extractNames infile\n"); } my $infileName = $ARGV[0]; my $outfileName = $ARGV[1]; open my $inFile, "<$infileName" or die("Could not open log file $infileName"); my $outFile; #open my $outFile, ">$outfileName" or die("Could not open log file $outfileName"); extractNames( $inFile, $outFile ); close( $inFile ); #close( $outFile ); } #call main();

    Read the article

  • Odd SQL behavior, I'm wondering why this works the way it does.

    - by Matthew Vines
    Consider the following Transact-SQL. DECLARE @table TABLE(val VARCHAR(255) NULL) INSERT INTO @table (val) VALUES('a') INSERT INTO @table (val) VALUES('b') INSERT INTO @table (val) VALUES('c') INSERT INTO @table (val) VALUES('d') INSERT INTO @table (val) VALUES(NULL) select val from @table where val not in ('a') I would expect this to return b, c, d, NULL, but instead it returns b, c, d. Why is this the case? Is NULL not evaluated? Is NULL somehow in the set 'a'?
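    The short answer is that val NOT IN ('a') expands to val <> 'a', and any comparison involving NULL evaluates to UNKNOWN rather than TRUE, so the NULL row is filtered out. As a minimal sketch of two common workarounds against the same table variable (the second assumes SQL Server 2008 or later for the VALUES row constructor):

    -- Option 1: handle NULL explicitly
    SELECT val FROM @table WHERE val NOT IN ('a') OR val IS NULL

    -- Option 2: NOT EXISTS keeps the outer row unless a real match is found
    SELECT t.val
    FROM   @table AS t
    WHERE  NOT EXISTS (SELECT 1 FROM (VALUES ('a')) AS x(v) WHERE x.v = t.val)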

    Read the article

  • SQL Query Theory Question...

    - by Keng
    I have a large historical transaction table (15-20 million rows, MANY columns) and a table with one row and one column. The table with one row contains a date (the last processing date) which will be used to pull the data from the transaction table ('process_date'). Question: Should I inner join the 'process_date' table to the transaction table, or the transaction table to the 'process_date' table?
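    For illustration only (the column names below are assumptions, since the real ones aren't given), the join can be written either way round:

    -- Illustrative sketch; transaction_date and last_process_date are hypothetical names
    SELECT t.*
    FROM   transaction_table AS t
           INNER JOIN process_date AS p
                   ON t.transaction_date >= p.last_process_date

    Because an INNER JOIN is commutative, listing 'process_date' first or last should not change the plan the optimizer picks; what tends to matter far more for a 15-20 million row table is whether the transaction date column is indexed.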

    Read the article

  • Benchmark MySQL Cluster using flexAsynch: No free node id found for mysqld(API)?

    - by quanta
    I am going to benchmark MySQL Cluster using flexAsynch follow this guide, details as below: mkdir /usr/local/mysqlc732/ cd /usr/local/src/mysql-cluster-gpl-7.3.2 cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysqlc732/ -DWITH_NDB_TEST=ON make make install Everything works fine until this step: # /usr/local/mysqlc732/bin/flexAsynch -t 1 -p 80 -l 2 -o 100 -c 100 -n FLEXASYNCH - Starting normal mode Perform benchmark of insert, update and delete transactions 1 number of concurrent threads 80 number of parallel operation per thread 100 transaction(s) per round 2 iterations Load Factor is 80% 25 attributes per table 1 is the number of 32 bit words per attribute Tables are with logging Transactions are executed with hint provided No force send is used, adaptive algorithm used Key Errors are disallowed Temporary Resource Errors are allowed Insufficient Space Errors are disallowed Node Recovery Errors are allowed Overload Errors are allowed Timeout Errors are allowed Internal NDB Errors are allowed User logic reported Errors are allowed Application Errors are disallowed Using table name TAB0 NDBT_ProgramExit: 1 - Failed ndb_cluster.log: WARNING -- Failed to allocate nodeid for API at 127.0.0.1. Returned eror: 'No free node id found for mysqld(API).' I also have recompiled with -DWITH_DEBUG=1 -DWITH_NDB_DEBUG=1. How can I run flexAsynch in the debug mode? # /usr/local/mysqlc732/bin/flexAsynch -h FLEXASYNCH Perform benchmark of insert, update and delete transactions Arguments: -t Number of threads to start, default 1 -p Number of parallel transactions per thread, default 32 -o Number of transactions per loop, default 500 -l Number of loops to run, default 1, 0=infinite -load_factor Number Load factor in index in percent (40 -> 99) -a Number of attributes, default 25 -c Number of operations per transaction -s Size of each attribute, default 1 (PK is always of size 1, independent of this value) -simple Use simple read to read from database -dirty Use dirty read to read from database -write Use writeTuple in insert and update -n Use standard table names -no_table_create Don't create tables in db -temp Create table(s) without logging -no_hint Don't give hint on where to execute transaction coordinator -adaptive Use adaptive send algorithm (default) -force Force send when communicating -non_adaptive Send at a 10 millisecond interval -local 1 = each thread its own node, 2 = round robin on node per parallel trans 3 = random node per parallel trans -ndbrecord Use NDB Record -r Number of extra loops -insert Only run inserts on standard table -read Only run reads on standard table -update Only run updates on standard table -delete Only run deletes on standard table -create_table Only run Create Table of standard table -drop_table Only run Drop Table on standard table -warmup_time Warmup Time before measurement starts -execution_time Execution Time where measurement is done -cooldown_time Cooldown time after measurement completed -table Number of standard table, default 0

    Read the article

  • jquery selector to count the number of visible table rows?

    - by sprugman
    I've got this html: <table> <tr style="display:table-row"><td>blah</td></tr> <tr style="display:none"><td>blah</td></tr> <tr style="display:none"><td>blah</td></tr> <tr style="display:table-row"><td>blah</td></tr> <tr style="display:table-row"><td>blah</td></tr> </table> I need to count the number of rows that don't have display:none. How can I do that?

    Read the article

  • how to localize a table with multiple text entries?

    - by rap-uvic
    Hello, I'm writing a web app which will allow creation of events. An event can have a title as well as a description, amongst other things. The app needs to be multilingual, so I have 4 tables for localization: ResourceTypes, ResourceKeys, Resources, and Locales. A resource key can have multiple values in the Resources table for different locales, so Resources is a many-to-many table between ResourceKeys and Locales. In the event table I want to have a resourceKey for its title as well as a resourceKey for its description. So my question is: is it OK from a database-design perspective to have two foreign keys from one table into another table? Has anybody used a better approach in such a scenario?
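    Two foreign keys from one table into the same target table are perfectly normal. As a hedged sketch (the column names, key columns and T-SQL flavor are assumptions based on the tables described), the event table simply carries one resource-key reference per localizable field:

    -- Illustrative only
    CREATE TABLE Events (
        EventID                  INT PRIMARY KEY,
        TitleResourceKeyID       INT NOT NULL REFERENCES ResourceKeys (ResourceKeyID),
        DescriptionResourceKeyID INT NOT NULL REFERENCES ResourceKeys (ResourceKeyID)
    )

    -- Fetching both texts for one locale then takes two joins to Resources
    SELECT rt.Value AS Title,
           rd.Value AS Description
    FROM   Events AS e
           JOIN Resources AS rt ON rt.ResourceKeyID = e.TitleResourceKeyID
                               AND rt.LocaleID = @LocaleID
           JOIN Resources AS rd ON rd.ResourceKeyID = e.DescriptionResourceKeyID
                               AND rd.LocaleID = @LocaleID
    WHERE  e.EventID = @EventID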

    Read the article

  • How to get elastic table next to a image?

    - by Pavel Chuchuva
    This is what I want: This is the best I could come up with: CSS img { background: red; float: left; } table { background: yellow; width: 90%; } HTML <img src="image.jpg" width="40" height="40" /> <table> <tr><td>Table</td></tr> </table> There is a problem with this approach. If you resize browser window at some point the table jumps below the image: click to view demo. What is the better way of achieving this layout?

    Read the article

  • Should I commit or rollback a transaction that creates a temp table, reads, then deletes it?

    - by Triynko
    To select information related to a list of hundreds of IDs... rather than make a huge select statement, I create a temp table, insert the IDs into it, join it with a table to select the rows matching the IDs, then delete the temp table. So this is essentially a read operation, with no permanent changes made to any persistent database tables. I do this in a transaction to ensure the temp table is deleted when I'm finished. My question is... what happens when I commit such a transaction vs. letting it roll back? Performance-wise... does the DB engine have to do more work to roll back the transaction vs. committing it? Is there even a difference, since the only modifications are done to a temp table? Related question here, but it doesn't answer my specific case involving temp tables: http://stackoverflow.com/questions/309834/should-i-commit-or-rollback-a-read-transaction
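    For context, here is a minimal sketch of the pattern being described (the ID values and the joined table are illustrative). Since the only writes go to the temp table, COMMIT is generally at least as cheap as ROLLBACK: a rollback still has to undo the logged inserts into tempdb, while a commit simply releases them.

    -- Illustrative pattern only
    BEGIN TRANSACTION

    CREATE TABLE #ids (id INT PRIMARY KEY)
    INSERT INTO #ids (id) VALUES (1)
    INSERT INTO #ids (id) VALUES (2)
    INSERT INTO #ids (id) VALUES (3)   -- hundreds of IDs in practice

    SELECT d.*
    FROM   dbo.SomeTable AS d
           JOIN #ids AS i ON i.id = d.id

    DROP TABLE #ids
    COMMIT TRANSACTION

    A local temp table is also dropped automatically when the connection (or the creating scope) ends, so the explicit transaction isn't strictly required for cleanup.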

    Read the article

  • Asp:table bordercolor different html rendered for 1.1 and 2.0.

    - by Malcolm
    Hi I have the following markup .NET 1.1 app. I want the grid lines of the table to be darkgray this is the goal here. <asp:table id="tbl" Runat="server" CellSpacing="0" BorderColor="darkgray" GridLines="Both"></asp:table> I have the app in IIS set as ver 1.1 in my dev box and 2.0 in production for various reasons. The page source in 1.1 renders this <table id="ctlTimesheetMonthly_tbl" cellspacing="0" rules="all" bordercolor="DarkGray" border="1" style="border-color:DarkGray;border-collapse:collapse;">` 2.0 renders this <table id="ctlTimesheetMonthly_tbl" cellspacing="0" rules="all" border="1" style="border-color:DarkGray;border-collapse:collapse;"> Which is wrong as it produces a white border for some reason. Any idea how to get both the same?? Malcolm

    Read the article

< Previous Page | 235 236 237 238 239 240 241 242 243 244 245 246  | Next Page >