Search Results

Search found 4849 results on 194 pages for 'schema migration'.


  • Convert Yes/No/Null from SQL to True/False in a DataTable

    - by Scott Chamberlain
    I have a SQL database (whose schema I have no control over) with a column that holds the varchar value "Yes", "No", or null. For my purposes, null is handled as No. I am programming in C# (.NET 3.5), using a DataTable and TableAdapter to pull the data down. I would like to bind the column directly, via a BindingSource, to a checkbox in my program, but I do not know how or where to put the logic that converts the string Yes/No/null to a boolean True/False. Reading a null from the SQL server and writing back a "No" on an update is acceptable behavior. Any help is greatly appreciated. EDIT -- This is being developed for Windows.
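
    If the conversion can live in the query rather than the UI, one option is to do the mapping in the SELECT itself. A minimal sketch, assuming a hypothetical table Requests with the varchar column Approved:

        -- NULL fails the = 'Yes' test and falls through to 0, so null is treated as No.
        SELECT CAST(CASE WHEN r.Approved = 'Yes' THEN 1 ELSE 0 END AS bit) AS ApprovedBit
        FROM Requests r;

    The column then arrives in the DataTable as a bit and binds to a checkbox without extra logic; writing "No" back on an update would still need the reverse mapping, either in the UPDATE statement or in the binding layer.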

    Read the article

  • java memory allocation under linux

    - by pstanton
    I'm running 4 Java processes with the following command: java -Xmx256m -jar ... and the system has 8GB of memory under Fedora 12. However, it is apparently going into swap. How can that be if 4 x 256m = 1GB? EDIT: also, how can all 8GB of memory be used with so little memory allocated to basically the only thing running? Is it Java not garbage collecting because the OS tells it it doesn't need to, or what? TOP:

        top - 20:13:57 up 3:55, 6 users, load average: 1.99, 2.54, 2.67
        Tasks: 251 total, 6 running, 245 sleeping, 0 stopped, 0 zombie
        Cpu(s): 50.1%us, 2.9%sy, 0.0%ni, 45.1%id, 1.1%wa, 0.0%hi, 0.8%si, 0.0%st
        Mem:   8252304k total,  8195552k used,    56752k free,    34356k buffers
        Swap: 10354680k total,    74044k used, 10280636k free,  6624148k cached

          PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
         1948 xxxxxxxx 20   0 1624m 240m 4020 S 96.8  3.0 164:33.75 java
         1927 xxxxxxxx 20   0  139m  31m  27m R 91.8  0.4  38:34.55 postgres
         1929 xxxxxxxx 20   0 1624m 200m 3984 S 86.2  2.5 183:24.88 java
         1969 xxxxxxxx 20   0 1624m 292m 3984 S 65.6  3.6 154:06.76 java
         1987 xxxxxxxx 20   0  137m  29m  27m R 28.5  0.4  75:49.82 postgres
         1581 root     20   0  159m  18m 4712 S 22.5  0.2  52:42.54 Xorg
         2411 xxxxxxxx 20   0  309m 9748 4544 S 20.9  0.1  45:05.08 gnome-system-mo
         1947 xxxxxxxx 20   0  137m  28m  27m S 13.3  0.4  44:46.04 postgres
         1772 xxxxxxxx 20   0  135m  25m  25m S  4.0  0.3   1:09.14 postgres
         1966 xxxxxxxx 20   0  137m  29m  27m S  3.0  0.4  64:27.09 postgres
         1773 xxxxxxxx 20   0  135m  732  624 S  1.0  0.0   0:24.86 postgres
         2464 xxxxxxxx 20   0 15028 1156  744 R  0.7  0.0   0:49.14 top
          344 root     15  -5     0    0    0 S  0.3  0.0   0:02.26 kdmflush
            1 root     20   0  4124  620  524 S  0.0  0.0   0:00.88 init
            2 root     15  -5     0    0    0 S  0.0  0.0   0:00.00 kthreadd
            3 root     RT  -5     0    0    0 S  0.0  0.0   0:00.00 migration/0
            4 root     15  -5     0    0    0 S  0.0  0.0   0:00.04 ksoftirqd/0

    Read the article

  • Looking for alternatives to the database project.

    - by Dave
    I have a fairly large database project containing nine databases, one of which has a fairly large schema. This project takes a long time to build and I'm about to pull my hair out. We'd like to keep our database source-controlled, but I'm having a hard time getting the other devs to use the project and build it before checking in, just because it takes so long to build. It is seriously crippling our work, so I'm looking for alternatives. Maybe something can be done with Redgate's SQL Compare? I think the only drawback there is that it doesn't validate syntax? Anyone's thoughts/suggestions would be most appreciated.

    Read the article

  • move data from one table to another, postgresql edition

    - by IggShaman
    Hi all, I'd like to move some data from one table to another (with a possibly different schema). The straightforward solution that comes to mind is to start a transaction with serializable isolation level and run:

        INSERT INTO dest_table SELECT data FROM orig_table, other-tables WHERE <condition>;
        DELETE FROM orig_table USING other-tables WHERE <condition>;
        COMMIT;

    Now what if the amount of data is rather big, and the <condition> is expensive to compute? In PostgreSQL, a RULE or a stored procedure can be used to delete the data on the fly, evaluating the condition only once. Which solution is better? Are there other options?
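
    For what it's worth, a sketch of one more option, assuming PostgreSQL 9.1 or later (where writable CTEs are available) and keeping the placeholders from the question; the rows are deleted and inserted in a single statement, so the condition is evaluated only once:

        -- <condition> and other_tables stand in for the real predicate and join tables;
        -- dest_table must accept orig_table's column list.
        WITH moved AS (
            DELETE FROM orig_table
            USING other_tables
            WHERE <condition>
            RETURNING orig_table.*
        )
        INSERT INTO dest_table
        SELECT * FROM moved;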

    Read the article

  • Will Visual Studio 2010 support HTML 5?

    - by Chris
    Since Visual Studio 2010 is slated for release in March of 2010, and HTML 5 is starting to be used ever more widely, I would like to know whether Visual Studio will ship with HTML 5 templates, standard controls and support for the more common markup. A definition of HTML 5 support would be that any new version of Visual Studio should offer code completion, validation and markup support similar to what is currently provided for HTML 4.01 and XHTML 1.0/1.1. Update: From the Visual Web Developer Team Blog: an HTML 5 IntelliSense and validation schema for Visual Studio 2008 and Visual Web Developer is available for download. Follow the instructions posted on the page to install the new schema. It seems the Visual Studio team will be supporting HTML 5 after all.

    Read the article

  • Regex for xml parsing

    - by ogmios
    What is your opinion of the following regexes - are they correct? To find an element with a specific, required attribute:

        "<(" + elem_name + ")(\s+(?:[^<]?\s+)" + attr_name + "\s*=\s*(['\"])((?:(?!\3).))\3[^<])(.*?)"

    To find an element with a specific but optional attribute:

        "<(" + elem_name + ")(\s*|\s+(?:[^<]?\s+)(?:" + attr_name + "\s*=\s*(['\"])((?:(?!\3).))\3)?[^<])(.*?)"

    Please, no more answers of "use an existing XML parser". The question is: are the regexes proper or not? This is a specific situation - C language on an embedded system, and the XML is not well-formed (this cannot be fixed; it does not depend on me). The XML has a specified schema and there are no problems with namespaces etc.

    Read the article

  • Crystal Reports XSD file required?

    - by user270370
    Hi, I am trying to create a new report using the push model to supply data. I am creating a DataTable in my C# code and pushing this to a report using a template. I have created a report template and an XSD (using DataSet.WriteXmlSchema) and added this to my report using the Database Expert option. I have since deleted the XSD schema, but the report still seems to be working. I was wondering why this is happening. Is the XSD file stored in the report? Thanks a lot. Ravi

    Read the article

  • Selecting a sequence NEXTVAL for multiple rows

    - by stringpoet
    I am building a SQL Server job to pull data from SQL Server into an Oracle database through a linked server. The table I need to populate has a sequence for the name ID, which is my primary key. I'm having trouble figuring out a way to do this simply, without some lengthy code. Here's what I have so far for the SELECT portion (some actual names obfuscated):

        SELECT
            (SELECT NEXTVAL FROM OPENQUERY(MYSERVER, 'SELECT ORCL.NAME_SEQNO.NEXTVAL FROM DUAL')),
            psn.BirthDate,
            psn.FirstName,
            psn.MiddleName,
            psn.LastName,
            c.REGION_CODE
        FROM Person psn
        LEFT JOIN MYSERVER..ORCL.COUNTRY c ON c.COUNTRY_CODE = psn.Country

    MYSERVER is the linked Oracle server; ORCL is obviously the schema. Person is a local table in the SQL Server database where the query is being executed. When I run this query, I get the exact same value for NEXTVAL on every record. What I need is for it to generate a new value for each returned record. I found this similar question, with its answers, but am unsure how to apply it to my case (if even possible): Query several NEXTVAL from sequence in one statement
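
    The uncorrelated subquery against OPENQUERY is effectively evaluated once, which is why every row gets the same value. One common workaround is to let Oracle assign the sequence value itself, so the insert never has to fetch NEXTVAL per row. A sketch, assuming the target table and its key column are NAME_TABLE and NAME_ID (hypothetical names) and that a trigger may be added on the Oracle side:

        -- Oracle side: populate NAME_ID from the sequence whenever a row arrives without one.
        CREATE OR REPLACE TRIGGER ORCL.NAME_ID_TRG
        BEFORE INSERT ON ORCL.NAME_TABLE
        FOR EACH ROW
        WHEN (NEW.NAME_ID IS NULL)
        BEGIN
            SELECT ORCL.NAME_SEQNO.NEXTVAL INTO :NEW.NAME_ID FROM DUAL;
        END;

    The SQL Server job can then insert the remaining columns through the linked server and leave NAME_ID out of the column list.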

    Read the article

  • EntityFramework: using association or add property manually

    - by dritterweg
    I'm starting to use Entity Framework. Let's say I have two entities generated from my tables in the DB. Here is the table schema:

        Profiles
            ProfileId
            FirstName
            LastName

        Hobbies
            Id
            HobbyName
            OwnerId

    So one profile can have many hobbies. My Entity Framework entities:

        ProfileEntity
            ProfileId
            FirstName
            LastName
            Hobbies (collection of HobbyEntity)   note: created by the Association tool

        HobbyEntity
            Id
            HobbyName
            Owner (type of ProfileEntity)   note: created by the Association tool; for me this property is not important

    My question: should I use the "Association" tool to create the relationship between the two entities, which as a result adds a property to each entity (ProfileEntity gets a HobbyEntity collection and vice versa), or should I skip the association and manually add a scalar property, such as List<HobbyEntity> in my ProfileEntity and OwnerId in HobbyEntity?

    Read the article

  • basic database design table on rails

    - by runcode
    I am confused about a concept. I am doing this in Rails. Is an entity set equal to a table in the database? Is a relationship set equal to a table in the database? Let's say we have the entity sets "USER", "POST" and "COMMENT":

        User    - can post as many posts and comments as they want
        Post    - belongs to a user
        Comment - belongs to a post and a user, so comment is a weak entity

    SCHEMA
    ======

        USER
            - id
            - name

        POST
            - id
            - user_id (FK)
            - comment_id (FK)

        COMMENT
            - id
            - user_id (FK)
            - post_id (FK)

    So USER, POST and COMMENT are tables, I think. What else is a table? And do I need a table for the relationships?
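
    For what it's worth, a sketch of the underlying SQL (names assumed from the question): the one-to-many relationships here live as foreign key columns on the "many" side and do not need tables of their own; a separate join table would only be needed for a many-to-many relationship.

        CREATE TABLE users (
            id   INTEGER PRIMARY KEY,
            name VARCHAR(255)
        );

        CREATE TABLE posts (
            id      INTEGER PRIMARY KEY,
            user_id INTEGER REFERENCES users (id)   -- post belongs to a user
        );

        CREATE TABLE comments (
            id      INTEGER PRIMARY KEY,
            user_id INTEGER REFERENCES users (id),  -- comment belongs to a user
            post_id INTEGER REFERENCES posts (id)   -- and to a post
        );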

    Read the article

  • Google App Engine datastore-primary key

    - by megala
    Hi, I've created a table in the Google App Engine Datastore. It contains the fields GROUPNAME, GROUPID and GROUPDESC. How do I set GROUPID as the primary key? My code is as follows:

        @Entity
        @Table(name = "group", schema = "PUBLIC")
        public class Creategroup {
            @Basic
            private String groupname;
            @Basic
            private String groupid;
            @Basic
            private String groupdesc;

            public void setGroupname(String groupname) { this.groupname = groupname; }
            public String getGroupname() { return groupname; }

            public void setGroupid(String groupid) { this.groupid = groupid; }
            public String getGroupid() { return groupid; }

            public void setGroupdesc(String groupdesc) { this.groupdesc = groupdesc; }
            public String getGroupdesc() { return groupdesc; }

            public Creategroup(String groupname, String groupid, String groupdesc) {
                // TODO Auto-generated constructor stub
                this.groupname = groupname;
                this.groupid = groupid;
                this.groupdesc = groupdesc;
            }
        }

    Read the article

  • how do I integrate the aspnet_users table (asp.net membership) into my existing database

    - by ooo
    I have a database that already has a users table with the columns:

        userID    - int
        loginName - string
        First     - string
        Last      - string

    I just installed the ASP.NET membership tables. Right now all of my tables are joined to my users table, foreign keyed to the "userID" field. How do I integrate the aspnet_Users table into my schema? Here are the ideas I thought of:

    1. Add a membership_id field to my users table and include that new field on new inserts. This seems like the cleanest way, as I don't need to break any existing relationships.
    2. Break all existing relationships and move all of the fields from my users table into the aspnet_Users table. This seems like a pain, but ultimately leads to the simplest, normalized solution.

    Any thoughts?
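
    A sketch of what the first option might look like (column and constraint names assumed; aspnet_Users keys its rows by a uniqueidentifier UserId):

        ALTER TABLE users
            ADD membership_id UNIQUEIDENTIFIER NULL;

        ALTER TABLE users
            ADD CONSTRAINT FK_users_aspnet_Users
            FOREIGN KEY (membership_id) REFERENCES aspnet_Users (UserId);

    Existing foreign keys to users.userID stay untouched; only the membership lookup goes through the new column.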

    Read the article

  • T-SQL - Is there a (free) way to compare data in two tables?

    - by RPM1984
    Okay, so I have table a and table b (SQL Server 2008). Both tables have the exact same schema. For the purposes of this question, consider table a = my local dev table and table b = the live table. I need to create a SQL script (containing UPDATE/DELETE/INSERT statements) that will update table b to be the same as table a. This script will then be deployed to the live database. Are there any free tools out there that can do this, or better yet, a way I can do it myself? I'm thinking I probably need to do some type of join on all the fields in the tables, then generate dynamic SQL based on that. Anyone have any ideas?
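
    As a rough free starting point (a sketch, not a full sync script), EXCEPT shows which rows differ in each direction; the tablediff.exe utility that ships with SQL Server can then generate the actual INSERT/UPDATE/DELETE script.

        -- Rows present (or different) in a but not in b: candidates for INSERT/UPDATE on b.
        SELECT * FROM table_a
        EXCEPT
        SELECT * FROM table_b;

        -- Rows present (or different) in b but not in a: candidates for UPDATE/DELETE on b.
        SELECT * FROM table_b
        EXCEPT
        SELECT * FROM table_a;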

    Read the article

  • Multiple databases support in Symfony

    - by Ngu Soon Hui
    I am using Propel as my DAL for my Symfony project. I can't seem to get my application to work across two or more databases. Here's my schema.yml:

        db1:
          lkp_User:
            pk_User:  { type: integer, required: true, primaryKey: true, autoIncrement: true }
            UserName: { type: varchar(45), required: true }
            Password: longvarchar
            _uniques:
              Unique: [ UserName ]

        db2:
          tesco:
            Id:          { type: integer, required: true, primaryKey: true, autoIncrement: true }
            Name:        { type: varchar(45), required: true }
            Description: longvarchar

    And here's the databases.yml:

        dev:
          db1:
            param:
              classname: DebugPDO
        test:
          db1:
            param:
              classname: DebugPDO
        all:
          db1:
            class: sfPropelDatabase
            param:
              classname: PropelPDO
              dsn: 'mysql:dbname=bpodb;host=localhost'    # where the db is located
              username: root
              password:                                   # pass
              encoding: utf8
              persistent: true
              pooling: true
          db2:
            class: sfPropelDatabase
            param:
              classname: PropelPDO
              dsn: 'mysql:dbname=mystore2;host=localhost' # where the db is located
              username: root
              password:                                   # pass
              encoding: utf8
              persistent: true
              pooling: true

    When I call php symfony propel-build-model, only db1 is generated; db2 is not. Any idea how to fix this problem?

    Read the article

  • Doctrine: Mixing YAML markup and db manager (navicat) editing?

    - by ropstah
    I think the answer to this question should be: No. However, I hope to be corrected. I'd like to edit our database using a mixture of YAML markup + Doctrine createTables() and Navicat editing. Can I maintain the inheritance which is marked up? Example (4 steps; at step 4, Doctrine is in no way able to re-create the inheritance schema... or is it?):

    Step 1: Create YAML with inheritance

        ---
        Entity:
          columns:
            username: string(20)
            password: string(16)
            created_at: timestamp
            updated_at: timestamp

        User:
          inheritance:
            extends: Entity
            type: column_aggregation
            keyField: type
            keyValue: 1

        Group:
          inheritance:
            extends: Entity
            type: column_aggregation
            keyField: type
            keyValue: 2

    Step 2: Create tables using Doctrine (and drop/create the db if necessary). Generated SQL:

        CREATE TABLE entity (
            id BIGINT AUTO_INCREMENT,
            username VARCHAR(20),
            password VARCHAR(16),
            created_at DATETIME,
            updated_at DATETIME,
            type VARCHAR(255),
            PRIMARY KEY(id)
        ) ENGINE = INNODB

    Step 3: Edit the table using Navicat

    Step 4: Refresh the YAML file because of the 'external' edits...

    Read the article

  • Secure xml messages being read from database into app.

    - by scope-creep
    I have an app that reads XML from a database using an NHibernate DAL. The DAL calls stored procedures that read the data from the schema, encapsulate it in an XML message, and enqueue it on an internal queue for processing. I would like to secure the channel from the database reads through to the dequeue action. What would be the best way to do that? I was thinking of signing the XML using the System.Security.Cryptography.Xml namespace, but are there other techniques or approaches I need to know about? Any help would be appreciated. Bob.

    Read the article

  • How to use SynonymFilterFactory in Solr?

    - by AlxVallejo
    I'm trying to apply synonym filtering at query time so that if I search for X, results for Y also show up. I go to where Solr is running, edit the .txt file and add X, Y on a new line. This does not work. I check the schema and I see:

        <analyzer type="query">
            <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true" />

    What am I missing?

    EDIT: Assessing the configuration files, tomcat6/Catalina/localhost seems to point to the correct location:

        <Context docBase="/data/solr/solr.war" debug="0" privileged="true" allowLinking="true" crossContext="true">
            <Environment name="solr/home" type="java.lang.String" value="/data/solr" override="true" />
        </Context>

    Also, in the Solr admin I see this. What does cwd mean?

        cwd=/usr/share/tomcat6 SolrHome=/data/solr/

    Read the article

  • Restricting deletion with NHibernate

    - by FrontSvin
    I'm using NHibernate (Fluent) to access an old third-party database with a bunch of tables that are not related in any explicit way. That is, child tables do have parentID columns containing the primary key of the parent table, but there are no foreign key constraints enforcing these relationships. Ideally I would like to add some foreign keys, but I cannot touch the database schema. My application works fine, but I would really like to impose a referential integrity rule prohibiting deletion of parent objects that have children, i.e. something similar to 'ON DELETE RESTRICT' but maintained by NHibernate. Any ideas on how to approach this would be appreciated. Should I look into the OnDelete() method on the IInterceptor interface, or are there other ways to solve this? Of course any solution will come with a performance penalty, but I can live with that.
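
    For reference, a sketch of the rule being emulated (table and constraint names assumed); it cannot be applied here because the schema is off-limits, so whatever NHibernate-side guard is used - an interceptor or an explicit count query before the delete - has to enforce the same condition:

        ALTER TABLE child_table
            ADD CONSTRAINT FK_child_parent
            FOREIGN KEY (parentID) REFERENCES parent_table (ID)
            ON DELETE RESTRICT;

        -- The equivalent pre-delete check an application-side guard would run:
        SELECT COUNT(*) FROM child_table WHERE parentID = ?;   -- delete the parent only if this is 0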

    Read the article

  • Exchange 2013 really slow outside of localhost

    - by ItsJustJP
    We've got a 12-core Xeon server with 24GB of RAM running Server 2012. We've recently migrated from Exchange 2010 (which was on another server) to Exchange 2013, which resides on our new 12-core server. Accessing OWA on the Exchange server itself is fine; it's very quick and responsive. However, accessing it from any other computer connected to the domain via a 1Gbps connection takes 10-15 seconds to load. Also running slow are the public calendars that people in my office need to access, again taking 10-15 seconds to open, which can sometimes cause Outlook to stop responding. Further to that, we have phones that connect via the internet (of course) to Exchange so people can get work email when they are out of the office. Guess what: this is also running slow. I have searched for solutions and have tried changing Outlook authentication methods, but there is no change in speed. The old Exchange 2010 server no longer exists, but there was no problem before the migration. Has anyone got any suggestions? Thanks :) I must also mention that the Server 2012 machine that Exchange 2013 is installed on is also the DC. Update: It would appear that any connection via HTTPS is slow. It took more than 15 minutes for an Outlook client to download 50MB of email (Outlook Anywhere).

    Read the article

  • Some way of getting just the data part from a web service returning a DataTable?

    - by simendsjo
    I'm using an external web service that returns a DataTable. There are a couple of problems with this. The first is that there are over 20,000 columns, and .NET crashes when trying to convert the XML to a DataTable; I have to use Mono to make this work. The other problem is that it takes a lot of time to fetch all the column information. The columns "never" change (and when they do, I know about it in advance). I would like to fetch just the data part and load the schema from a local file. I very much doubt this is possible, but I thought I could ask here just in case.

    Read the article

  • No difference between nullable:true and nullable:false in Grails 1.3.6?

    - by knorv
    The following domain model definition ..

        class Test {
            String a
            String b

            static mapping = {
                version(false)
                table("test_table")
                a(nullable: false)
                b(nullable: true)
            }
        }

    .. yields the following MySQL schema ..

        CREATE TABLE test_table (
            id bigint(20) NOT NULL AUTO_INCREMENT,
            a varchar(255) NOT NULL,
            b varchar(255) NOT NULL,
            PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    Please note that a and b get identical MySQL column definitions, despite the fact that a is defined as non-nullable and b as nullable in the GORM mapping. What am I doing wrong? I'm running Grails 1.3.6.
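
    For comparison, a sketch of the DDL the mapping would be expected to produce if the nullable settings were picked up (only the b column differs):

        CREATE TABLE test_table (
            id bigint(20) NOT NULL AUTO_INCREMENT,
            a varchar(255) NOT NULL,
            b varchar(255) DEFAULT NULL,
            PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;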

    Read the article

  • Things to consider when building a continuous integration server?

    - by Dave
    I'm new to continuous integration, but I immediately realize its value and want to get a server set up right away. I have played with TeamCity and have it working great in a VM. Now, I don't want to spend money on another system, so I was planning on just running the VM again on a faster machine (i.e. my dev system). A few questions come to mind with this:

    Hard disk allocation - how big should it be? Sure, 60GB seems like more than enough, but people also used to think that we'd never need more than 64KB of RAM.

    Backups - is it even important to back up the integration server? Sure, I guess it's nice so that one doesn't have to go through the entire configuration process again, but I would think that's about it. I could snapshot my VM every time I make a configuration change, and then do a backup of applications only (ignoring the buildAgent stuff).

    Migration - if I want to move away from a VM on my dev system to a new server, which may even run Windows Server 2003, is it easy enough? Perhaps this last point is best suited for StackOverflow.

    Read the article

  • Rails: id field is nil when calling Model.new

    - by Joe Cannatti
    I am a little confused about the auto-increment id field in Rails. I have a Rails project with a simple schema. When I check development.sqlite3, I can see that all of my tables have an id field with auto-increment:

        CREATE TABLE "messages" ("id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, "text" text, "created_at" datetime, "updated_at" datetime);

    but when I call Message.new in the console, the resulting object has an id of nil:

        >> a = Message.new
        => #<Message id: nil, text: nil, created_at: nil, updated_at: nil>

    Shouldn't the id come back populated?

    Read the article

  • What is the best way to restore (roll back) data in an application to a specified state (date)?

    - by panzerschreck
    Hello, an example will set the context right. The example below captures the various states of an entity which needs to be reverted (rolled back).

        State 1 - recorded on 01-Mar-2010
            Column1   Column2
            Data1     0.56

        State 2 - recorded on 02-Mar-2010
            Column1   Column2
            Data1     0.57

        State 3 - recorded on 03-Mar-2010
            Column1   Column2
            Data1     0.58

    The user notices that state 3 is not what he intended and decides to revert back to state 2. One approach I can think of, without modifying the entity, is "auditing" all the inserts/updates, as below; the rollback information captures the data just before the updates/modifications to the entity, so that it can be applied in order when you need to revert. Please note that changing the entity's schema is not an option.

        Rollback R1 - recorded on 01-Mar-2010
            Column1   Column2
            Data1     0.56

        Rollback R2 - recorded on 02-Mar-2010
            Column1   Column2
            Data1     0.56

        Rollback R3 - recorded on 03-Mar-2010
            Column1   Column2
            Data1     0.57

    So, to get to state 2, we would start with rollback information R1 and apply R2 onto it. Is there a better approach to achieve this? Thanks for your time.
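
    A sketch of one alternative (table and column names assumed): keep a history table that records every state together with its effective date, rather than only the pre-update values. Reverting then becomes an as-of query plus an UPDATE of the live row.

        CREATE TABLE entity_history (
            entity_key  VARCHAR(50)   NOT NULL,   -- e.g. 'Data1'
            column2     DECIMAL(10,2) NOT NULL,   -- the value as of recorded_on
            recorded_on DATE          NOT NULL,
            PRIMARY KEY (entity_key, recorded_on)
        );

        -- State of 'Data1' as it was on 02-Mar-2010 (state 2, i.e. 0.57):
        SELECT column2
        FROM entity_history
        WHERE entity_key = 'Data1'
          AND recorded_on = (SELECT MAX(recorded_on)
                             FROM entity_history
                             WHERE entity_key = 'Data1'
                               AND recorded_on <= '2010-03-02');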

    Read the article
