Search Results

Search found 47870 results on 1915 pages for 'add column'.


  • Will MyISAM type tables work better than InnoDB for large numbers of columns?

    - by Ethan
    I have a MySQL InnoDB table with 238 columns. 56 of them are of TEXT type and 27 are VARCHAR(255). I sometimes get MySQL error 139 when users insert data. After research I found that I'm probably running into InnoDB row size/column size/column count limitations. (I'm putting it that way because the specific limits among those three things are interdependent.) The InnoDB docs give an idea of the limits. If I switch this table to MyISAM, is it likely to solve the problem? I understand the maximum row size of 65,535 bytes; I think I'm somehow hitting InnoDB's additional 8,000-byte limit. Switching to PostgreSQL is also a remote option, but would take much longer.
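
    If the 768-byte in-row prefixes that InnoDB (in its older row formats) keeps for long TEXT values are what push the row over the ~8,000-byte limit, one workaround besides switching engines is to split some of the TEXT columns into a 1:1 child table. A sketch with hypothetical names, not the actual schema:

      CREATE TABLE main_table_text (
        id    INT NOT NULL PRIMARY KEY,   -- same primary key as the wide table
        note1 TEXT,                       -- hypothetical TEXT columns moved out of the wide row
        note2 TEXT,
        FOREIGN KEY (id) REFERENCES main_table (id)
      ) ENGINE=InnoDB;

      ALTER TABLE main_table DROP COLUMN note1, DROP COLUMN note2;

    Queries that need those columns join back on id, and the wide table's in-row footprint shrinks by up to 768 bytes for each long TEXT column moved out.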

    Read the article

  • Storing multiple checkbox values in database

    - by Madjokr
    Hi, I want to store multiple column values in a table. Let's take an example: "What are your favourite colours?", where the choices can be red, blue, green, orange. Let's assume the user selects at least 2 values. Is there a way to store these multiple values in a table? I had implemented this by concatenating the user's choices into a single column, but later found that this is bad practice. Currently I can think of using a bitwise operator or HABTM. What are the different ways of storing multiple-choice values in a table? If I am implementing this in Rails, what is the best way to do it with OOP concepts? Are there any built-in options in Rails?
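
    The usual relational answer is a separate join table with one row per selected choice. A minimal sketch with hypothetical table and column names:

      CREATE TABLE colors (
        color_id INT NOT NULL PRIMARY KEY,
        name     VARCHAR(50) NOT NULL
      );

      CREATE TABLE user_favorite_colors (
        user_id  INT NOT NULL,
        color_id INT NOT NULL,
        PRIMARY KEY (user_id, color_id),                      -- one row per (user, colour) pair
        FOREIGN KEY (color_id) REFERENCES colors (color_id)
      );

    In Rails this maps onto the has_and_belongs_to_many (or has_many :through) association, which is the HABTM option mentioned above.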

    Read the article

  • MySQL: Count two things in one query?

    - by Nebs
    I have a "boolean" column in one of my tables (value is either 0 or 1). I need to get two counts: The number of rows that have the boolean set to 0 and the number of rows that have it set to 1. Currently I have two queries: One to count the 1's and the other to count the 0's. Is MySQL traversing the entire table when counting rows with a WHERE condition? I'm wondering if there's a single query that would allow two counters based on different conditions? Or is there a way to get the total count along side the WHERE conditioned count? This would be enough as I'd only have to subtract one count from the other (due to the boolean nature of the column). There are no NULL values. Thanks.

    Read the article

  • Get a list of documents from Mongo DB

    - by Andrei Neagu
    Hello, I want to do something like this:

      List<int> fff = new List<int>();
      fff.Add(1);
      fff.Add(2);
      fff.Add(5);
      Mongo m = new Mongo();
      m.Connect();
      var dataBase = m.GetDatabase("database");
      var collection = dataBase.GetCollection("coll");
      IMongoQuery queryable = collection.AsQueryable();
      MongoQueryProvider prov = new MongoQueryProvider(collection);
      var query = new MongoQuery(prov);
      var ffppp = from p221 in query
                  where fff.Contains((int)p221["oid"])
                  select p221;

    This throws the error: "The method 'Contains' could not be converted into a constant." I saw that Mongo has an $in operator (http://www.mongodb.org/display/DOCS/Advanced+Queries). Does anyone know how I can use it from C#? Thanks

    Read the article

  • Maintaining Project with Git

    - by gkrdvl
    Hi all, I have 2 projects that are about 80% the same; the main differences are language and business model. One targets a larger audience in English with a $9/month subscription, the other uses the local language with a freemium model. Sometimes when I add a new feature or piece of functionality I want it in both projects, but sometimes I want a feature only in the local project. My question is: how do I maintain these 2 projects with git? Maintain 2 git repositories, one per project, or maintain a single git repository with 2 main branches, or is there another suggestion?

    Read the article

  • ASP.NET MVC jqGrid data binding

    - by SARAVAN
    Hi, I am using a jqGrid with a column named 'Comments'. My controller code returns data as follows:

      var jsonData = new {
          rows = ....
          ....
          select new {
              col1....
              col2....
              Comments = _Model.GetComments(id),
          })
          .......
          .....
          return Json(jsonData, JsonRequestBehavior.AllowGet);
      }

    _Model.GetComments(id) returns a ClientComments object with a few properties, say CommentID, FirstName, MiddleName, etc., which is bound to each row in the grid. Now in my jqGrid I need to build a tooltip based on the Comments column's properties, and for that I need to use the properties of Comments for each row. How can I work with the Comments properties for each row? Any help would be appreciated. In my JavaScript I tried rowObject.Comments.FirstName for each row, but it did not work.

    Read the article

  • Slickgrid, confirm before edit

    - by Evan
    I am building a SlickGrid and need the ability to add rows with a code column. For this to be possible the code column needs to be editable for the entire table. I am trying to work out a way to confirm the edit with a standard JavaScript confirm popup box. I tried putting one into the onedit event within the SlickGrid constructor, but it executed after the edit. I am led to believe that the edit function is independent of calling the edit stored procedure on the database. Is there a better way to go about this?

      RF_tagsTable = new IndustrialSlickGrid(LadlesContainerSG, {
          URL: Global.DALServiceURL + "CallProcedure",
          DatabaseParameters: {
              Procedure: "dbo.TableRF_tags",
              ConnectionStringName: Global.ConnectionStringNames.LadleTracker
          },
          Title: "RF tags",
          Attributes: {
              AllowDelete: true,
              defaultColumnWidth: 120,
              editable: true,
              enableAddRow: true,
              enableCellNavigation: true,
              enableColumnReorder: false,
              rowHeight: 25,
              autoHeight: true,
              autoEdit: false,
              forceFitColumns: true
          },
          Events: {
              onRowEdited: rowEdited
              /*function(){
                  // this is my failed attempt
                  var r = confirm("Edit an existing tag?");
                  if (r) {
                      alert(r);
                  } else {
                      alert(r);
                  }
              }*/,
              onRowAdded: rowAdded
          }
      });

    Read the article

  • NHibernate one-to-many inserts but doesn't update

    - by user210713
    Instead of getting into code, I have a simple question. The default behavior for a simple one-to-many is that the child record is inserted and then the foreign key column is updated with the parent key. Has anyone ever had a one-to-many where the child object gets inserted but not updated, resulting in a row in the table with a NULL in the foreign key column? I want the default behaviour for a standard one-to-many. I don't want to have to add the parent as a property on the child. Thanks.

    Read the article

  • Image Processing, joining the small images to form the main image

    - by n0idea
    Good morning everyone. I'm having a small issue with image processing and I'm in need of some help. First of all, let me explain what I want to do: I have an image that was split into 4 smaller images. I currently have about 6 small images and need to figure out which ones are part of the real image. Second, what I currently know is that I should compare the images' edges, i.e. the last column of one image with the first column of the other. I'm not sure yet what exactly should be done; can anyone put me on the right track, with some detailed hints on how to compare the edges of 2 images? Some links and example code would be helpful. One more thing: how can I read .raw images using Java, C# or Python?

    Read the article

  • Safe HttpContext.Current.Cache Usage

    - by Burak SARICA
    Hello there, I use the Cache in a web service method like this:

      var pblDataList = (List<blabla>)HttpContext.Current.Cache.Get("pblDataList");
      if (pblDataList == null)
      {
          var PBLData = dc.ExecuteQuery<blabla>(@"SELECT blabla");
          pblDataList = PBLData.ToList();
          HttpContext.Current.Cache.Add("pblDataList", pblDataList, null,
              DateTime.Now.Add(new TimeSpan(0, 0, 15)),
              Cache.NoSlidingExpiration, CacheItemPriority.Normal, null);
      }

    I wonder, is this thread safe? The method is called by multiple requesters, and more than one requester may hit the second line at the same time while the cache is empty, so all of them will retrieve the data and add it to the cache. The query takes 5-8 seconds. Would a lock statement surrounding this code prevent that? (I know multiple queries will not cause an error, but I want to be sure only one query runs.)

    Read the article

  • Custom Expression in Linq-to-Sql Designer

    - by csharpnoob
    According to Microsoft (http://msdn.microsoft.com/de-de/library/system.data.linq.mapping.columnattribute.expression.aspx) it's possible to add an Expression to the LINQ-to-SQL mapping. But how do I configure or add it in the Visual Studio designer? The problem is that when I add it manually to XYZ.designer.cs, it is lost as soon as the file is regenerated:

      //------------------------------------------------------------------------------
      // <auto-generated>
      //     This code was generated by a tool.
      //     Runtime Version:2.0.50727.4927
      //
      //     Changes to this file may cause incorrect behavior and will be lost if
      //     the code is regenerated.
      // </auto-generated>
      //------------------------------------------------------------------------------

    This is generated:

      [Column(Name="id", Storage="_id", DbType="Int")]
      public System.Nullable<int> id { ...

    But I need something like this:

      [Column(Name="id", Storage="_id", DbType="Int", Expression="Max(id)")]
      public System.Nullable<int> id { ...

    Thanks.

    Read the article

  • In SQL, in what situation do we want to index a field in a table, or 2 fields in a table at the same time?

    - by Jian Lin
    In SQL, it is obvious that whenever we want to search millions of records, say by CustomerID in a Transactions table, we want to add an index on CustomerID. Is another situation in which we want an index when we need to do an inner or outer join using that field as the join criterion? For example, with an inner join on t1.customerID = t2.customerID, if we don't have an index on customerID in either table we are looking at O(n^2), because we need to loop through the 2 tables sequentially; if we have an index on customerID in both tables it becomes O((log n)^2) and is much faster. Are there any other situations where we want to add an index to a field in a table? What about adding an index for 2 fields combined in a table, that is, one index for 2 fields together?
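
    For illustration, a minimal sketch (hypothetical table and column names) of a single-column index that supports the join above, and of a composite index that covers two fields with one index:

      -- Single-column index on the join/filter key
      CREATE INDEX ix_transactions_customer ON Transactions (CustomerID);

      -- Composite index: one index over two fields
      CREATE INDEX ix_transactions_customer_date ON Transactions (CustomerID, TransactionDate);

    Note that a composite index on (CustomerID, TransactionDate) helps queries that filter on CustomerID alone or on both columns, but not queries that filter only on TransactionDate, because of the leftmost-prefix rule.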

    Read the article

  • Common views in viewControllers - Code re-usability

    - by Nitish
    I have a few common views in most of my view controllers. What I noticed is that I can reuse a single piece of code across all view controllers, which seems wise. For this I decided to create a class Utils that has static methods like +(void)createCommonViews:(float)yAxis:(NSString*)text; In my case the common views are three labels and two images. Problem: I am not able to add these views from Utils. I am wondering how I can pass self as a parameter so that I can add the views from Utils. It may be wrong to add views from outside the view controller; in that case, what could the solution be? Putting all these views into a UIView, making the Utils method return a UIView, and then adding that UIView to the view controller (after calling the method from the view controller) might solve my problem, but I am looking for some other solution. Thanks, Nitish

    Read the article

  • Is it possible to partition more than one way at a time in SQL Server?

    - by meeting_overload
    I'm considering various ways to partition my data in SQL Server. One approach I'm looking at is to partition a particular huge table into 8 partitions, then within each of these partitions to partition on a different partition column. Is this even possible in SQL Server, or am I limited to defining one partition column, function, and scheme per table? I'm interested in the more general answer, but the strategy I'm considering is for a Distributed Partitioned View, where I'd partition the data under the first scheme, using the DPV to distribute the huge amount of data over 8 machines, and then on each machine partition that portion of the full table on another partition key in order to be able to drop (for example) sub-partitions as required.
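
    For reference, native table partitioning in SQL Server ties a table to exactly one partition column, declared through one function and one scheme; a second level would have to be simulated, e.g. by the DPV layer across machines. A minimal sketch with hypothetical names:

      -- One partition function + scheme; a table can be partitioned on only one column
      CREATE PARTITION FUNCTION pfByRegion (int)
          AS RANGE LEFT FOR VALUES (1, 2, 3, 4, 5, 6, 7);   -- 8 partitions

      CREATE PARTITION SCHEME psByRegion
          AS PARTITION pfByRegion ALL TO ([PRIMARY]);

      CREATE TABLE HugeTable (
          RegionId  int      NOT NULL,
          EventDate datetime NOT NULL,
          Payload   varchar(100)
      ) ON psByRegion (RegionId);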

    Read the article

  • Fastest way to display a full calendar

    - by Aurélien Ribon
    Hello, I need to display a complete calendar (12 months, ~31 days/month) on screen. Currently I'm using a 12-column grid, with each column filled with a "month" StackPanel. Each "month" StackPanel is filled with 31 (or fewer) day representations. Each day representation is a DockPanel containing three controls: a TextBlock to display the day letter, a TextBlock to display the day number, and a TextBlock to display a short message. Of course, performance collapses when I try to resize the window. Is there a useful trick that allows fast display of this many TextBlocks?

    Read the article

  • Passing parameters to a jQuery function

    - by jmpena
    Hello, I have a problem using jQuery. I'm creating HTML in a loop, and it has an Action column; that column is a hyperlink that, when the user clicks it, calls a JavaScript function and passes parameters. Example:

      <a href="#" OnClick="DoAction(1,'Jose');" > Click </a>
      <a href="#" OnClick="DoAction(2,'Juan');" > Click </a>
      <a href="#" OnClick="DoAction(3,'Pedro');" > Click </a>
      ...
      <a href="#" OnClick="DoAction(n,'xxx');" > Click </a>

    I want that function to call an Ajax jQuery function with the correct parameters. Any help?

    Read the article

  • NHibernate bag is always null

    - by Neville Chinan
    I have set up my mapping file and classes as suggested by many articles:

      class A { ... IList BBag {get;set;} ... }
      class B { ... A aObject {get;set;} ... }

      <class name="A">
        ...
        <bag name="BBag" table="B" inverse="true" lazy="false">
          <key column="A_ID" />
          <one-to-many class="B" />
        </bag>
        ...
      <class name="B">
        ...
        <many-to-one name="aObject" class="A" column="A_ID" />
        ...

    I added a set of A's to the A table and a set of B's to the B table, and all the data is stored as expected. However, if I try to access aInstance.BBag.Count I get a null reference exception. I think I'm missing some key knowledge of how a bag gets instantiated. Thanks

    Read the article

  • "ref" vs "out" keyword errors using the sql-net and C# Sqlite combination for windows phone 7.1

    - by param
    I am getting some "ref" vs "out" keyword errors using the sql-net and C# Sqlite combination for windows phone 7.1. Is this due to a wrong combination of libraries that I am using? App Type: Windows Phone 7.1 Using: 1) sql-net Version 1.0.5, Source Nuget thru Visual Studio 2) C# Sqlite for WP7 (wp7sqlite) ( Community.CSharpSqlite.WP7) Version 0.1.1, Source Nuget thru Visual Studio. The exact error I receive is below **Error 5 The best overloaded method match for Community.CsharpSqlite.Sqlite3.sqlite3_open(string, ref Community.CsharpSqlite.Sqlite3.sqlite3)' has some invalid arguments C:\Dev\Learning\SQLite.cs Line:2492 Column: 29 ** The next error then hints that it is related to the parameter being passed as "out" type instead of "ref" type. Error 6 Argument 2 must be passed with the 'ref' keyword C:\Dev\Learning\SQLite.cs Line: 2492 Column: 64 I can make the compile errors go away by replacing the "out" keyword with the "ref" keyword, but that is likely to lead to other issues. Given that I do not see much complain about this issue - I may be doing something wrong but not able to detect easily. Thanks, Parmeshwar

    Read the article

  • Simple bad merge scenario in Mercurial

    - by user281180
    I have created a repository AAA and another, BBB.

      1. In AAA I create a file A with the values a1, a2, a3 and commit.
      2. In BBB I create a file B with the values b1, b2, b3, commit, and export a bundle. I add the bundle to AAA and merge.
      3. I make a change to B, writing b33 in AAA, and another change to B, writing b23 in BBB, and commit both.
      4. I create a bundle of BBB, add the bundle to AAA, and do a merge.

    Now I decide to revert to step 2: I no longer want the merge from step 4, as the changes made to B were bad merges. Now I want to add the bundle with the step 3 changes again, but it can't see any changes anymore. Why? How can I do the merge once more?

    Read the article

  • phpMyAdmin MySQL foreign key problem

    - by alan
    Hey guys, I'm using phpMyAdmin (PHP & MySQL) and I'm having a lot of trouble linking tables using foreign keys. I'm getting negative values for the field CountyId (which is the foreign key). However, it links to my other table fine and cascades fine. When I go to add data there is a drop-down box for CountyId, and the values look something like this: " -1 1- ". Here is my ALTER statement:

      ALTER TABLE Baronies
        ADD FOREIGN KEY (CountyId) REFERENCES Counties (CountyId)
        ON DELETE CASCADE
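
    One thing worth checking, since the column definitions aren't shown here (a sketch only, not a confirmed fix): both sides of a MySQL foreign key must have identical types, including signedness, and both tables must use InnoDB for the constraint to be enforced.

      -- Compare the two column definitions (type, signedness, NULLability)
      SHOW COLUMNS FROM Counties LIKE 'CountyId';
      SHOW COLUMNS FROM Baronies LIKE 'CountyId';

      -- Confirm both tables use the InnoDB engine
      SHOW TABLE STATUS WHERE Name IN ('Counties', 'Baronies');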

    Read the article

  • What can I do about a SQL Server ghost FK constraint?

    - by rcook8601
    I'm having some trouble with a SQL Server 2005 database that seems to be keeping a ghost constraint around. I've got a script that drops the constraint in question, does some work, and then re-adds the same constraint. Normally it works fine. Now, however, it can't re-add the constraint because the database says that it already exists, even though the drop worked fine! Here are the queries I'm working with:

      alter table individual drop constraint INDIVIDUAL_EMP_FK

      ALTER TABLE INDIVIDUAL ADD CONSTRAINT INDIVIDUAL_EMP_FK
        FOREIGN KEY (EMPLOYEE_ID) REFERENCES EMPLOYEE

    After the constraint is dropped, I've made sure that the object really is gone by using the following queries:

      select object_id('INDIVIDUAL_EMP_FK')
      select * from sys.foreign_keys where name like 'individual%'

    Both return no results (or NULL), but when I try to add the constraint again, I get: The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "INDIVIDUAL_EMP_FK". Trying to drop it again gets me a message that it doesn't exist. Any ideas?
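
    Note that "The ALTER TABLE statement conflicted with the FOREIGN KEY constraint" is the message SQL Server raises when existing rows violate the constraint being added; a genuine name collision would instead report that an object with that name already exists. A sketch of two things to try, assuming EMPLOYEE_ID is the key column on both sides (the EMPLOYEE key column isn't shown in the post):

      -- Find rows in INDIVIDUAL whose EMPLOYEE_ID no longer exists in EMPLOYEE
      SELECT i.EMPLOYEE_ID
      FROM INDIVIDUAL i
      LEFT JOIN EMPLOYEE e ON e.EMPLOYEE_ID = i.EMPLOYEE_ID
      WHERE i.EMPLOYEE_ID IS NOT NULL
        AND e.EMPLOYEE_ID IS NULL;

      -- Or re-add the constraint without validating existing rows (leaves it untrusted)
      ALTER TABLE INDIVIDUAL WITH NOCHECK
        ADD CONSTRAINT INDIVIDUAL_EMP_FK FOREIGN KEY (EMPLOYEE_ID) REFERENCES EMPLOYEE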

    Read the article

  • BULK INSERT with an inconsistent number of columns

    - by aceinthehole
    I am trying to load a large amount of data into SQL Server from a flat file using BULK INSERT. However, my file has a varying number of columns; for instance the first row contains 14 and the second contains 4. That is OK: I just want to make a table with the maximum number of columns and load the file into it, with NULLs for the missing columns. I can play with it from that point. But it seems that when SQL Server reaches the end of a line and still has more columns to fill for that row in the destination table, it just moves on to the next line and tries to put that line's data into the wrong columns. Is there a way to get the behavior I am looking for? Is there a table hint I can use to specify this? Has anyone run into this before? Here is the code:

      BULK INSERT #t
      FROM '<path to file>'
      WITH
      (
        DATAFILETYPE = 'char',
        KEEPNULLS,
        FIELDTERMINATOR = '#'
      )
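
    BULK INSERT itself has no hint for ragged rows. A common workaround, sketched here under the assumption that the null character never occurs in the file, is to bulk-load each whole line into a one-column staging table and split it on '#' afterwards in T-SQL:

      CREATE TABLE #raw_lines (line VARCHAR(MAX));

      BULK INSERT #raw_lines
      FROM '<path to file>'
      WITH
      (
        DATAFILETYPE = 'char',
        FIELDTERMINATOR = '\0',   -- a terminator that never appears, so each line lands in one column
        ROWTERMINATOR = '\n'
      );

      -- Each row of #raw_lines now holds a full line; split it on '#' into the wide
      -- destination table, padding the missing trailing columns with NULL.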

    Read the article

  • Adding autocomplete options to AUCTeX C-c C-e?

    - by Seamus
    When I'm using AUCTeX with Emacs to write LaTeX documents, I would like to be able to add a couple more options to the list of environment types that AUCTeX "recognises" and can autocomplete, namely Theorem, Lemma, Proof, itemize* and a couple of others. Which variable do I need to edit? I have played around with customize-apropos for LaTeX and AUCTeX, but I haven't found it. (A Lisp snippet to add to my .emacs would be preferred; I don't quite understand the syntax yet.)

    Read the article

  • How to prevent the "other user" from appearing on the logon screen of a server?

    - by user114106
    When I want to log in to a server console (Server 2008 R2), I press Ctrl+Alt+Del and get "Other User", which I need to click before I get the login box to enter my credentials. This wouldn't be so bad, but I want to use this server as a Citrix server, and so far every user that tries to connect has to click "Other User" before they can enter their own credentials. Has anyone got any ideas on what I can do to make it go straight to the username and password prompt without the extra click?

    Read the article

  • Duplicate partitioning key performance impact

    - by Anshul
    I've read in some posts that having a duplicate partitioning key can have a performance impact. I have two tables like:

      CREATE TABLE "Test1" (
        key text,
        column1 text,
        value text,
        PRIMARY KEY (key, column1)
      )

      CREATE TABLE "Test2" (
        key text,
        name text,
        age text,
        ...
        PRIMARY KEY (key, name, age)
      )

    In Test1, column1 will contain a column name and value will contain its corresponding value. The main advantage of Test1 is that I can add any number of column/value pairs to it without altering the table, just by providing the same partitioning key each time. Now my question is how each of these table schemas will impact read/write performance if I have millions of rows and up to 50 columns per row. How will it impact the compaction/repair time if I'm writing duplicate entries frequently?
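
    To make the Test1 pattern concrete, a minimal sketch with hypothetical values: repeating the same partition key simply adds more clustered rows to that partition, with no ALTER TABLE needed.

      INSERT INTO "Test1" (key, column1, value) VALUES ('user42', 'name',  'Anshul');
      INSERT INTO "Test1" (key, column1, value) VALUES ('user42', 'email', 'a@example.com');
      INSERT INTO "Test1" (key, column1, value) VALUES ('user42', 'age',   '30');

      -- All three rows land in the same partition, keyed by 'user42',
      -- one logical row per (key, column1) pair.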

    Read the article
