Search Results

Search found 6532 results on 262 pages for 'computed columns'.

  • I want to get 2 values returned by my query. How do I do this using LINQ to Entities?

    - by Shantanu Gupta
        var dept_list = (from map in DtMapGuestDepartment.AsEnumerable()
                         where map.Field<Nullable<long>>("GUEST_ID") == DRowGuestPI.Field<Nullable<long>>("PK_GUEST_ID")
                         join dept in DtDepartment.AsEnumerable()
                             on map.Field<Nullable<long>>("DEPARTMENT_ID") equals dept.Field<Nullable<long>>("DEPARTMENT_ID")
                         select new
                         {
                             dept_id = dept.Field<long>("DEPARTMENT_ID"),
                             dept_name = dept.Field<long>("DEPARTMENT_NAME")
                         }).Distinct();

        DataTable dt = new DataTable();
        dt.Columns.Add("DEPARTMENT_ID");
        dt.Columns.Add("DEPARTMENT_NAME");
        foreach (long? dept_ in dept_list)
        {
            dt.Rows.Add(dept_[0], dept_[1]);
        }

    EDIT: In a previous question of mine I got an answer like this for a single value. What is the difference between the two?

        foreach (long? dept in dept_list)
        {
            dt.Rows.Add(dept);
        }
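
    The select new { ... } produces a sequence of anonymous objects, not long? values, so neither foreach above works as written; the results are read back through the named properties (and DEPARTMENT_NAME is presumably text, so Field<string> would fit better than Field<long>). A minimal sketch:

        // iterate the anonymous-type results by property name
        foreach (var dept in dept_list)
        {
            dt.Rows.Add(dept.dept_id, dept.dept_name);
        }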

  • Avoid the use of loops (for) with R

    - by albergali
    Hi, I'm working with R and I have code like this:

        i <- 1
        j <- 1
        for (i in 1:10)
            for (j in 1:100)
                if (data[i] == paths[j, 1])
                    cluster[i, 4] <- paths[j, 2]

    where:

        data is a vector with 100 rows and 1 column
        paths is a matrix with 100 rows and 5 columns
        cluster is a matrix with 100 rows and 5 columns

    My question is: how could I avoid using "for" loops to iterate through the matrix? I don't know whether the apply functions (lapply, tapply...) are useful in this case. This becomes a problem when j = 10000, for example, because the execution time is very long. Thank you
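
    A vectorised sketch using match(), which finds, for each element of data, the first row of paths whose first column matches it (assuming the structures described above):

        idx <- match(data, paths[, 1])         # first matching row of paths for each data[i]
        hit <- !is.na(idx)                     # only update rows that actually matched
        cluster[hit, 4] <- paths[idx[hit], 2]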

  • Fulltext and composite indexes and how they affect the query

    - by Brett
    Just say I had a query as below:

        SELECT name, category, address, city, state
        FROM table
        WHERE MATCH(name, subcategory, category, tag1) AGAINST('education')
          AND city = 'Oakland'
          AND state = 'CA'
        LIMIT 0, 10;

    ...and I had a fulltext index on (name, subcategory, category, tag1) and a composite index on (city, state); is this good enough for this query? Just wondering if something extra is needed when mixing additional ANDs with the MATCH/AGAINST that uses the fulltext index.

    Edit: What I am trying to understand is what happens with the columns that are in the query but not covered by the chosen index (the fulltext index) - in the example above, city and state. How does MySQL find the matching rows for these, since it can't use two indexes (or can it?)? Basically, I'm trying to understand how MySQL goes about finding the data optimally for the columns NOT in the chosen fulltext index, and whether there is anything I can or should do to optimize the query.
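
    For a query like this MySQL generally settles on a single index per table: once the optimizer picks the fulltext index to satisfy MATCH ... AGAINST, the city and state predicates are applied as plain filters over the rows the fulltext search returns, so the composite index goes unused here (it still helps queries without the MATCH). EXPLAIN shows which index was chosen - a sketch, with listings standing in for the real table name:

        EXPLAIN SELECT name, category, address, city, state
        FROM listings
        WHERE MATCH(name, subcategory, category, tag1) AGAINST('education')
          AND city = 'Oakland' AND state = 'CA'
        LIMIT 0, 10;
        -- key should report the fulltext index; "Using where" indicates
        -- city/state are filtered after the fulltext lookup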

  • NHibernate and Composite Key References

    - by Rich
    I have a weird situation. I have four entities: Company, Employee, Plan and Participation (in a retirement plan).

        Company PK:       Company ID
        Plan PK:          Company ID, Plan ID
        Employee PK:      Company ID, SSN, Employee ID
        Participation PK: Company ID, SSN, Plan ID

    The problem is in linking the Employee to the Participation. From a DB perspective, Participation should have Employee ID in its PK, but it doesn't (it's not even in the table). NHibernate won't let me map the "has many" because the link expects 3 columns (since the Employee PK has 3 columns), but I'd only provide 2. Any ideas on how to do this?
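
    One direction worth trying (a sketch, untested against this model, with all names assumed): NHibernate can point a reference at a unique column group that isn't the primary key, via a <properties> group and property-ref, so Participation could reach Employee through just Company ID and SSN - assuming that pair is unique per employee:

        <!-- Employee mapping: expose (CompanyID, SSN) as a named, unique group -->
        <properties name="CompanySsn" unique="true">
          <property name="CompanyId" column="CompanyID" insert="false" update="false" />
          <property name="Ssn"       column="SSN"       insert="false" update="false" />
        </properties>

        <!-- Participation mapping: reference that group instead of the full PK -->
        <many-to-one name="Employee" class="Employee" property-ref="CompanySsn">
          <column name="CompanyID" />
          <column name="SSN" />
        </many-to-one>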

  • WxPython multiple grid instances

    - by randomPythonHacker
    Does anybody know how I can get multiple instances of the same grid to display on one frame? Whenever I create more than 1 instance of the same object, the display of the original grid widget completely collapses and I'm left unable to do anything with it. For reference, here's the code:

        import wx
        import wx.grid as gridlib

        class levelGrid(gridlib.Grid):
            def __init__(self, parent, rows, columns):
                gridlib.Grid.__init__(self, parent, -1)
                self.moveTo = None
                self.CreateGrid(rows, columns)
                self.SetDefaultColSize(32)
                self.SetDefaultRowSize(32)
                self.SetColLabelSize(0)
                self.SetRowLabelSize(0)
                self.SetDefaultCellBackgroundColour(wx.BLACK)
                self.EnableDragGridSize(False)

        class mainFrame(wx.Frame):
            def __init__(self, parent, id, title):
                wx.Frame.__init__(self, parent, id, title, size=(768, 576))
                editor = levelGrid(self, 25, 25)
                panel1 = wx.Panel(editor, -1)
                #vbox = wx.BoxSizer(wx.VERTICAL)
                #vbox.Add(editor, 1, wx.EXPAND | wx.ALL, 5)
                #selector = levelGrid(self, 1, 25)
                #vbox.Add(selector, 1, wx.EXPAND | wx.BOTTOM, 5)
                self.Centre()
                self.Show(True)

        app = wx.App()
        mainFrame(None, -1, "SLAE")
        app.MainLoop()
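
    A sketch of the usual fix: give both grids one common parent (a panel on the frame) and let a sizer divide the space, instead of parenting a panel to one of the grids as above:

        class mainFrame(wx.Frame):
            def __init__(self, parent, id, title):
                wx.Frame.__init__(self, parent, id, title, size=(768, 576))
                panel = wx.Panel(self, -1)            # one parent for both grids
                editor = levelGrid(panel, 25, 25)
                selector = levelGrid(panel, 1, 25)
                vbox = wx.BoxSizer(wx.VERTICAL)
                vbox.Add(editor, 1, wx.EXPAND | wx.ALL, 5)
                vbox.Add(selector, 0, wx.EXPAND | wx.BOTTOM, 5)
                panel.SetSizer(vbox)
                self.Centre()
                self.Show(True)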

  • TSQL, select values from large many-to-many relationship

    - by eugeneK
    I have two tables, Publishers and Campaigns; both have similar many-to-many relationships with Countries, Regions, Languages and Categories. More info: Publisher2Categories has publisherID and categoryID, which are foreign keys to publisherID in Publishers and categoryID in Categories (both identity columns). On the other side I have Campaigns2Categories with campaignID and categoryID columns, which are foreign keys to campaignID in Campaigns and categoryID in Categories (again identities). The same goes for the Regions, Languages and Countries relationships. I pass a certain publisherID to the query and want to get the campaignIDs of Campaigns that share at least one Region, Country, Language or Category value with that Publisher. Thanks
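
    A sketch of one way to express "shares at least one value", assuming the remaining link tables follow the same naming pattern (Publisher2Regions, Campaigns2Regions, and so on):

        SELECT DISTINCT cc.campaignID
        FROM Campaigns2Categories cc
        JOIN Publisher2Categories pc ON pc.categoryID = cc.categoryID
        WHERE pc.publisherID = @publisherID

        UNION

        SELECT DISTINCT cr.campaignID
        FROM Campaigns2Regions cr
        JOIN Publisher2Regions pr ON pr.regionID = cr.regionID
        WHERE pr.publisherID = @publisherID

        -- ...two more UNION blocks for Countries and Languages;
        -- UNION (not UNION ALL) removes duplicate campaignIDs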

  • Trying to verify understanding of Foreign Keys MSSQL

    - by msarchet
    So I'm working on a learning project to expose myself to doing some things I don't get to do at work. I'm just making a simple bug and case tracking app (I know there are a million of these; it's just to work with some tools I don't normally use). While designing my database I realized that although I've used foreign keys before, I've never actually set up a column as a FK myself. So I've designed my database as follows, which I think is close to correct (at least for the initial layout). However, when I try to add the FKs to the linking tables I get an error saying, "The tables present in the relationship must have the same number of columns". I'm doing this in SSMS by going to the Keys folder and adding a FK. Is there something I am doing wrong here? I don't understand why the tables would have to have the same number of columns for me to add a FK relationship between them.
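
    That error is about the column lists chosen for the two sides of the relationship, not the tables themselves: a foreign key must pair its columns one-for-one (same count, compatible types) with the referenced key's columns, and the designer dialog makes it easy to leave a stray row in that pairing. Doing it in T-SQL makes the pairing explicit - a sketch with hypothetical tables:

        ALTER TABLE dbo.BugNote
        ADD CONSTRAINT FK_BugNote_Bug
            FOREIGN KEY (BugID)           -- column(s) in the referencing table...
            REFERENCES dbo.Bug (BugID);   -- ...paired with the referenced key column(s)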

  • RadioButton checkedchanged event firing multiple times

    - by kash3
    Hi, I am trying to add multiple RadioButton columns to my GridView dynamically in code, and I want to implement some logic involving a database fetch in the CheckedChanged event of the radio buttons, but somehow the CheckedChanged event is fired multiple times for each row. Following is the code:

    aspx:

        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
            BackColor="White" BorderColor="#CC9966" BorderStyle="None" EnableViewState="true"
            BorderWidth="1px" CellPadding="4" Font-Names="Verdana">
            <FooterStyle BackColor="#FFFFCC" ForeColor="#330099" />
            <Columns>
                <asp:TemplateField HeaderText="Select One">
                    <ItemTemplate>
                    </ItemTemplate>
                </asp:TemplateField>
                <asp:TemplateField HeaderText="Select Two">
                    <ItemTemplate>
                    </ItemTemplate>
                </asp:TemplateField>
                <asp:TemplateField>
                    <ItemTemplate>
                        <asp:Label ID="lblval" runat="server" Text="!" ForeColor="Red" Visible="false"/>
                    </ItemTemplate>
                </asp:TemplateField>
            </Columns>

    Code-behind:

        void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.DataItem != null)
            {
                DataRowView dvRowview = (DataRowView)e.Row.DataItem;
                int currentRow = GridView1.Rows.Count;

                RadioButton rdoSelect1 = new RadioButton();
                rdoSelect1.GroupName = "Select" + currentRow;
                rdoSelect1.ID = string.Concat("rdoSelect1", currentRow);
                rdoSelect1.AutoPostBack = true;
                rdoSelect1.CheckedChanged += new EventHandler(rdoSelect_CheckedChanged);
                e.Row.Cells[0].Controls.Add(rdoSelect1);

                RadioButton rdoSelect2 = new RadioButton();
                rdoSelect2.GroupName = "Select" + currentRow;
                rdoSelect2.ID = string.Concat("rdoSelect2", currentRow);
                rdoSelect2.AutoPostBack = true;
                rdoSelect2.CheckedChanged += new EventHandler(rdoSelect_CheckedChanged);
                e.Row.Cells[1].Controls.Add(rdoSelect2);

                if (!IsPostBack)
                {
                    e.Row.Cells[e.Row.Cells.Count - 1].Controls[1].Visible = false;
                    if (e.Row.Cells[0] != null && Convert.ToBoolean(dvRowview["Select1"]) == true)
                        rdoSelect1.Checked = true;
                    else
                        rdoSelect1.Checked = false;
                    if (e.Row.Cells[0] != null && Convert.ToBoolean(dvRowview["Select2"]) == true)
                        rdoSelect2.Checked = true;
                    else
                        rdoSelect2.Checked = false;
                }
            }
        }

        void rdoSelect_CheckedChanged(object sender, EventArgs e)
        {
            RadioButton rdoSelectedOption = (RadioButton)sender;
            GridViewRow selRow = rdoSelectedOption.NamingContainer as GridViewRow;
            if (rdoSelectedOption.Checked)
                selRow.Cells[selRow.Cells.Count - 1].Controls[1].Visible = true;
            else
                selRow.Cells[selRow.Cells.Count - 1].Controls[1].Visible = false;
        }

    I want the CheckedChanged event to fire only once per group name and row.
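
    One commonly suggested direction (a sketch, untested against this page): dynamic controls have to be recreated with the same IDs on every request before postback events replay, which RowCreated does and a bind-once RowDataBound does not; e.Row.RowIndex also gives a stable per-row number, whereas GridView1.Rows.Count can differ between the first bind and the postback, changing the IDs and confusing the event wiring:

        protected void GridView1_RowCreated(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType != DataControlRowType.DataRow) return;
            int row = e.Row.RowIndex;   // stable across postbacks, unlike Rows.Count

            RadioButton rdoSelect1 = new RadioButton();
            rdoSelect1.GroupName = "Select" + row;
            rdoSelect1.ID = "rdoSelect1" + row;
            rdoSelect1.AutoPostBack = true;
            rdoSelect1.CheckedChanged += rdoSelect_CheckedChanged;
            e.Row.Cells[0].Controls.Add(rdoSelect1);
            // ...same for rdoSelect2; keep the initial Checked assignment
            // (which needs the data row) in RowDataBound
        }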

  • Determining Excel spreadsheet format before Data Flow Task

    - by Josh Larsen
    I'm working on an SSIS package which uses a For Each loop to iterate through Excel files in a directory and a data flow task to import them. The issue I'm having is that the project manager I'm working with doesn't think the users will always follow the structure, so if a file in the folder is missing columns or has extra ones, the import generates an error, of course. Even though I have the task set not to fail the package, the package does indeed fail, and then the other files aren't imported. So I'm wondering: what is the easiest way to either determine that the spreadsheet is incorrectly formatted, or stop the error from failing the package execution? After that step I would just use a file copy task to move the file to a "Failure" folder and then continue processing the spreadsheets.
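
    One option is a Script Task ahead of the data flow that opens the workbook and compares its column names with the expected list, then steers the file to the Failure folder via a precedence constraint when they don't match. A sketch (C# shown, so SSIS 2008+; the connection string and sheet name are assumptions):

        using System.Data;
        using System.Data.OleDb;

        static bool HasExpectedColumns(string path, string[] expected)
        {
            string conn = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + path +
                          ";Extended Properties=\"Excel 8.0;HDR=YES\"";
            using (OleDbConnection cn = new OleDbConnection(conn))
            {
                cn.Open();
                // schema rows describe the columns of the named worksheet
                DataTable cols = cn.GetOleDbSchemaTable(OleDbSchemaGuid.Columns,
                                 new object[] { null, null, "Sheet1$", null });
                if (cols.Rows.Count != expected.Length) return false;
                foreach (string name in expected)
                    if (cols.Select("COLUMN_NAME = '" + name + "'").Length == 0)
                        return false;
                return true;
            }
        }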

  • mysql ignores not null constraint?

    - by Marga Keuvelaar
    I have created a table with NOT NULL constraints on some columns in MySQL. Then in PHP I wrote a script to insert data with an INSERT query. When I omit one of the NOT NULL columns in this INSERT statement I would expect an error message from MySQL, and I would expect my script to fail. Instead, MySQL inserts empty strings in the NOT NULL fields. In the other omitted fields the data is NULL, which is fine. Could someone tell me what I did wrong here? I'm using this table:

        CREATE TABLE IF NOT EXISTS tblCustomers (
            cust_id int(11) NOT NULL AUTO_INCREMENT,
            custname varchar(50) NOT NULL,
            company varchar(50),
            phone varchar(50),
            email varchar(50) NOT NULL,
            country varchar(50) NOT NULL,
            ...
            date_added timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (cust_id)
        );

    And this insert statement:

        $sql = "INSERT INTO tblCustomers (custname,company)
                VALUES ('".$customerName."','".$_POST["CustomerCompany"]."')";
        $res = mysqli_query($mysqli, $sql);
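
    The table is fine: when MySQL is not running in strict mode, omitting a NOT NULL column that has no default makes it silently substitute the type's implicit default (an empty string for varchar) and issue only a warning. Enabling strict mode turns that into the expected error - a sketch:

        -- for the current session (or set sql-mode in my.cnf server-wide):
        SET SESSION sql_mode = 'STRICT_ALL_TABLES';

        INSERT INTO tblCustomers (custname, company) VALUES ('Acme', 'Acme Inc');
        -- ERROR 1364: Field 'email' doesn't have a default value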

  • How to pass record id into hyperlink in radgrid?

    - by mongoose_za
    I've a RadGrid and am rendering a hyperlink column. I want to pass the id of the record into the URL for the hyperlink. How can I do this? I have this:

        <Columns>
            <telerik:GridTemplateColumn AllowFiltering="false" HeaderText="Edit" UniqueName="Edit">
                <ItemTemplate>
                    <asp:HyperLink ID="HyperLink1" runat="server" Target="_blank"
                        NavigateUrl="~/Edit.aspx?Id=need_to_bind_id_here">Edit Details</asp:HyperLink>
                </ItemTemplate>
            </telerik:GridTemplateColumn>
        </Columns>

    There is an ID column which is generated too.
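
    Inside an ItemTemplate a standard data-binding expression can build the URL - a sketch, assuming the data item's key field is named Id:

        <asp:HyperLink ID="HyperLink1" runat="server" Target="_blank"
            NavigateUrl='<%# "~/Edit.aspx?Id=" + Eval("Id") %>'>Edit Details</asp:HyperLink>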

  • Trigger an action to increment all rows of an int column which are greater than or equal to the inserted row

    - by Dev
    I am performing some insertions into an SQL table which has three columns, Id, Product and ProductOrder, and several rows of data:

        Id  Product  ProductOrder
        1   Dell     1
        2   HP       3
        3   lenovo   2
        4   Apple    10

    Now, I would like a trigger which fires an action that increments by 1 all the ProductOrders which are greater than or equal to the inserted ProductOrder. For example, if I insert a record with Id=5, Product=Sony, ProductOrder=2, then it should look for all the products with ProductOrder greater than or equal to 2 and increment them by 1, so the resultant data in the SQL table should be as follows:

        Id  Product  ProductOrder
        1   Dell     1
        2   HP       4
        3   lenovo   3
        4   Apple    11
        5   Sony     2

    From the above we can see that the ProductOrders which are equal to or greater than the inserted one (HP, lenovo, Apple) are incremented by 1. May I know a way to implement this?
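
    A sketch of an INSTEAD OF INSERT trigger (table name assumed to be dbo.Products, single-row inserts assumed): shifting the existing rows before the insert happens keeps the trigger from bumping the new row as well, which a plain AFTER trigger would do:

        CREATE TRIGGER trg_Products_ShiftOrder
        ON dbo.Products
        INSTEAD OF INSERT
        AS
        BEGIN
            -- shift existing orders at or beyond the incoming order
            UPDATE p
            SET p.ProductOrder = p.ProductOrder + 1
            FROM dbo.Products p
            WHERE p.ProductOrder >= (SELECT MIN(ProductOrder) FROM inserted);

            -- then perform the original insert
            INSERT INTO dbo.Products (Id, Product, ProductOrder)
            SELECT Id, Product, ProductOrder FROM inserted;
        END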

  • replacing data.frame element-wise operations with data.table (that used rowname)

    - by Harold
    So let's say I have the following data.frames:

        df1 <- data.frame(y = 1:10, z = rnorm(10), row.names = letters[1:10])
        df2 <- data.frame(y = c(rep(2, 5), rep(5, 5)), z = rnorm(10), row.names = letters[1:10])

    And perhaps the "equivalent" data.tables:

        dt1 <- data.table(x = rownames(df1), df1, key = 'x')
        dt2 <- data.table(x = rownames(df2), df2, key = 'x')

    If I want to do element-wise operations between df1 and df2, they look something like:

        dfRes <- df1 / df2

    And rownames() is preserved:

        R> head(dfRes)
            y          z
        a 0.5  3.1405463
        b 1.0  1.2925200
        c 1.5  1.4137930
        d 2.0 -0.5532855
        e 2.5 -0.0998303
        f 1.2 -1.6236294

    My poor understanding of data.table says the same operation should look like this:

        dtRes <- dt1[, !'x', with = F] / dt2[, !'x', with = F]
        dtRes[, x := dt1[, x, ]]
        setkey(dtRes, x)

    (setkey is optional.) Is there a more data.table-esque way of doing this? As a slightly related aside, more generally I would have other columns, such as factors, in each data.table, and I would like to omit those columns while doing the element-wise operations but still have them in the result. Does this make sense? Thanks!
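
    One keyed-join idiom to try (a sketch; the i. prefix for the outer table's columns needs a reasonably recent data.table):

        # join dt1 into dt2 on the key x, dividing column by column;
        # plain names (y, z) are dt2's columns, i.y / i.z are dt1's
        dtRes <- dt2[dt1, .(x, y = i.y / y, z = i.z / z)]

    The same j expression also speaks to the aside: only the columns named in .(...) take part in the arithmetic, and any extra columns (factors and the like) can simply be listed through unchanged.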

  • How to extract a Sub-Matrix from a Matrix ?

    - by ZaZu
    Hello, I have a matrix in a txt file and I want to load it based on my input of the number of rows and columns. For example, with a 4 by 4 matrix in the file, I want to extract a 3 by 3 matrix; how can I do that? I created a nested loop using:

        FILE *sample;
        sample = fopen("randomfile.txt", "r");
        for (i = 0; i < rows; i++) {
            for (j = 0; j < cols; j++) {
                fscanf(sample, "%f", &matrix[i][j]);
            }
            fscanf(sample, "\n", &matrix[i][j]);
        }
        fclose(sample);

    Sadly the code does not work. If I have this matrix:

        5.00 4.00   5.00  6.00
        5.00 4.00   3.00 25.00
        5.00 3.00   4.00 23.00
        5.00 2.00 352.00  6.00

    and input 3 for rows and 3 for columns, I get:

        5.00  4.00 5.00
        6.00  5.00 4.00
        3.00 25.00 5.00

    which is obviously wrong: it's reading value by value rather than skipping the unwanted trailing columns. What am I doing wrong? Thanks!
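
    %f skips any whitespace (including newlines) before each number, so the loop simply reads the next nine values in file order; the extra values at the end of each file row are never consumed. A sketch of one fix, assuming the file's true dimensions (here 4 by 4) are known:

        /* read every value of each file row, but keep only the top-left rows x cols block */
        int file_rows = 4, file_cols = 4;   /* actual dimensions of the stored matrix */
        for (i = 0; i < file_rows; i++) {
            for (j = 0; j < file_cols; j++) {
                float value;
                fscanf(sample, "%f", &value);
                if (i < rows && j < cols)
                    matrix[i][j] = value;
            }
        }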

  • Import de-normalized relational data from Excel into SQL Server

    - by roryf
    I need to import data from an Excel spreadsheet into SQL Server, but the data isn't in a relational/normalized format, so the import wizard isn't going to cut it (as far as I know). The data is in this format:

        Category    SubCategory    Name       Description
        Category#1  SubCategory#1  Product#1  Description#1
        Category#1  SubCategory#1  Product#2  Description#2
        Category#1  SubCategory#2  Product#3  Description#3
        Category#1  SubCategory#2  Product#4  Description#4
        Category#2  SubCategory#3  Product#5  Description#5

    (Apologies, I'm lacking the inventiveness to come up with 'real' data at this time in the morning...) Each row contains a unique product, but the category structure is duplicated. I want to import this data into three tables: Category, SubCategory and Product (I know SubCategory should really be contained within Category; the DB was not my design). I need a way to import unique rows based on the Category and then SubCategory columns, and then, when importing the other columns into Product, obtain a reference to the SubCategory based on name. Short of scripting this, is there any way to do it using the import wizard or some other tool?
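
    One wizard-friendly route: let the import wizard load the sheet as-is into a staging table, then normalize with three INSERT ... SELECT statements. A sketch with assumed table and column names:

        -- 1. distinct categories
        INSERT INTO Category (Name)
        SELECT DISTINCT Category FROM Staging;

        -- 2. distinct subcategories, linked to their category by name
        INSERT INTO SubCategory (CategoryID, Name)
        SELECT DISTINCT c.CategoryID, s.SubCategory
        FROM Staging s
        JOIN Category c ON c.Name = s.Category;

        -- 3. products, linked to their subcategory by name
        INSERT INTO Product (SubCategoryID, Name, Description)
        SELECT sc.SubCategoryID, s.Name, s.Description
        FROM Staging s
        JOIN SubCategory sc ON sc.Name = s.SubCategory;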

  • Normalise this Table?

    - by Abs
    Hello all, I am creating a social bookmarking app and I'm having second thoughts about the DB design in the middle of development. Should I normalise the bookmarks table and move the tag columns into a separate table? I have 10 tags per bookmark and therefore 10 columns per record (per bookmark). It seems to me that breaking the table in two would just mean I have to do a join, whereas the way I currently have it, it's a straight select - but the table doesn't feel right...? Thanks all
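
    For comparison, the normalised shape is usually a many-to-many between bookmarks and tags rather than ten fixed columns - a sketch:

        CREATE TABLE bookmark (
            bookmark_id INT PRIMARY KEY,
            url         VARCHAR(2000) NOT NULL
        );

        CREATE TABLE tag (
            tag_id INT PRIMARY KEY,
            name   VARCHAR(50) NOT NULL UNIQUE
        );

        CREATE TABLE bookmark_tag (
            bookmark_id INT NOT NULL REFERENCES bookmark (bookmark_id),
            tag_id      INT NOT NULL REFERENCES tag (tag_id),
            PRIMARY KEY (bookmark_id, tag_id)
        );

    The join costs a little on reads, but "find all bookmarks with tag X" becomes an indexed lookup instead of ten OR conditions, and the 10-tag ceiling disappears.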

  • asp.net checkbox in gridview - checked property is missing

    - by Peter PitLock
    In this ASP.NET GridView control, the checked property is always missing. I need to access the checked property via jQuery. GridView source:

        <Columns>
            <asp:TemplateField>
                <ItemTemplate>
                    <asp:CheckBox ID="chkSelected" runat="server" class="chkSummarySelection" />
                </ItemTemplate>
            </asp:TemplateField>
        </Columns>

    Renders as:

        <input type="checkbox" name="ctl00$ContentPlaceHolder1$gv$ctl02$SelectedCheckBox"
               id="ctl00_ContentPlaceHolder1_gv_ctl02_SelectedCheckBox">

    There is no checked property to access. I have tried:

        $(".chkSummarySelection").click(function () {
            var chk;
            chk = $(this).prop("checked");
            chk = $(this).attr("checked");
            chk = $(this).is(":checked");
            chk = $(this).attr("value");
            chk = $(this).val();
            chk = jQuery(this).is(':checked');
        });

    but nothing is working.
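
    Two things stand out in the rendered markup: an HTML checked attribute only appears when the box is actually checked (the live state is what .prop('checked') or :checked reads at runtime), and the rendered input carries neither the chkSelected ID suffix nor the chkSummarySelection class - ASP.NET puts extra attributes of an asp:CheckBox on a wrapping span, not on the input - so the selector above likely never matches anything. A sketch that targets the inputs themselves (jQuery 1.7+ for .on; the class is assumed to land on the wrapper span):

        // match the checkbox inputs inside the styled wrapper spans
        $(document).on('change', 'span.chkSummarySelection input[type=checkbox]', function () {
            var isChecked = $(this).prop('checked');   // true/false live state
            // ...
        });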

  • Are batch mutations atomic in Cassandra?

    - by user317459
    The Cassandra API supports batch mutations:

        batch_mutate(keyspace, mutation_map, consistency_level)

    Executes the specified mutations on the keyspace. mutation_map is a map: the outer map maps the key to the inner map, which maps the column family to the mutations, so it can be read as map<key, map<column family, list<Mutation>>>. To be more specific, the outer map key is a row key and the inner map key is the column family name. A Mutation specifies either columns to insert or columns to delete. See Mutation and Deletion above for more details. Are all mutations that are executed in a batch executed atomically? So if one of the mutations fails, do the others fail too?

  • BULK INSERT from one table to another all on the server

    - by steve_d
    I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the export data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the desired changes and whatnot. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again (assuming I am correct that that's what the SSIS package is doing). Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (As far as I can tell, BULK INSERT can only pull data from a file.)

    Edit: I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kind of slow. SSIS/BULK INSERT dumps the data into the table without regard to indexes and logging and whatnot, so it's faster. (Of course creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO ... SELECT that I tried in my first attempt.)

    Edit 2: The schema changes include (but are not limited to):

        1. Splitting one table into two new tables. In the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for both new tables. Once the migration is over, one of the tables will have a one-to-many relationship to the other.
        2. Moving columns from one table to another.
        3. Deleting some cross-reference tables that only cross-referenced 1-to-1. Instead, the reference will be a foreign key in one of the two tables.
        4. Some new columns will be created with default values.
        5. Some tables aren't changing at all, but I have to copy them over due to the "put it all in a new DB" request.
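
    A server-side sketch: on SQL Server 2008, an INSERT ... SELECT into an empty heap with a TABLOCK hint can be minimally logged when the database is in the SIMPLE or BULK_LOGGED recovery model, which captures most of the bulk-load speed without the data ever leaving the server (this particular shortcut isn't available on 2005; the names below are assumed):

        SET IDENTITY_INSERT dbo.NewTable ON;

        INSERT INTO dbo.NewTable WITH (TABLOCK) (Id, ColA, ColB)
        SELECT Id, ColA, ColB
        FROM OldDb.dbo.OldTable;

        SET IDENTITY_INSERT dbo.NewTable OFF;

    Building the clustered index after the load, as described above, fits this pattern.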

  • How to change rows in a table based on other table rows in mysql?

    - by understack
    I've a table which has 3 columns: id, a_id and b_id. Suppose the rows are like this:

        1, a1, b1
        2, a1, b2
        3, a1, b3
        4, a2, b4
        5, a2, b5
        6, a2, b6

    I want to convert it to:

        1, a1, b1
        2, a1, b1
        3, a1, b1
        4, a2, b4
        5, a2, b4
        6, a2, b4

    So I want to make all the b_id values for a given a_id the same, equal to the one which is found first. How can I do this? For simplicity I've removed the other columns from the table, so please ignore the row duplication here.
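
    A sketch using MySQL's multi-table UPDATE, reading "found first" as the row with the lowest id per a_id (table name t assumed):

        UPDATE t
        JOIN (SELECT a_id, MIN(id) AS first_id
              FROM t
              GROUP BY a_id) m ON m.a_id = t.a_id
        JOIN t AS first_row ON first_row.id = m.first_id
        SET t.b_id = first_row.b_id;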

  • MySQL load data null values

    - by SP1
    Hello, I have a file that can contain from 3 to 4 columns of numerical values separated by commas. Empty fields are defined, except when they are at the end of the row:

        1,2,3,4,5
        1,2,3,,5
        1,2,3

    The following table was created in MySQL:

        +-------+--------+------+-----+---------+-------+
        | Field | Type   | Null | Key | Default | Extra |
        +-------+--------+------+-----+---------+-------+
        | one   | int(1) | YES  |     | NULL    |       |
        | two   | int(1) | YES  |     | NULL    |       |
        | three | int(1) | YES  |     | NULL    |       |
        | four  | int(1) | YES  |     | NULL    |       |
        | five  | int(1) | YES  |     | NULL    |       |
        +-------+--------+------+-----+---------+-------+

    I am trying to load the data using the MySQL LOAD command:

        LOAD DATA INFILE '/tmp/testdata.txt' INTO TABLE moo
        FIELDS TERMINATED BY ","
        LINES TERMINATED BY "\n";

    The resulting table:

        +------+------+-------+------+------+
        | one  | two  | three | four | five |
        +------+------+-------+------+------+
        |    1 |    2 |     3 |    4 |    5 |
        |    1 |    2 |     3 |    0 |    0 |
        |    1 |    2 |     3 | NULL | NULL |
        +------+------+-------+------+------+

    The problem is that when a field in the raw data is empty but present, MySQL for some reason does not use the column's default value (which is NULL) and uses zero instead. NULL is used correctly when the field is missing altogether. Unfortunately, I have to be able to distinguish between NULL and 0 at this stage, so any help would be appreciated. Thanks, S.
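
    An empty field arrives as an empty string, which an int column converts to 0; reading the affected columns into user variables and mapping empty strings back to NULL with NULLIF avoids that. A sketch:

        LOAD DATA INFILE '/tmp/testdata.txt' INTO TABLE moo
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n'
        (one, two, three, @four, @five)
        SET four = NULLIF(@four, ''),
            five = NULLIF(@five, '');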

  • Copy the item with the metadata that the user just added to a folder called "NativeFiles"

    - by James
    I need to copy the item that the user just added (for example, myresume.doc or financial.xls), together with its metadata (the doc library gets its columns from a content type, and the content type gets its columns from site columns), into a folder called "NativeFiles"; every doc library has this folder. I know ItemAdded can be used, but I heard ItemAdded fires before the user has a chance to complete the metadata for the item they just added. What are my options? (New to SP, so some sample code would greatly help, or some good link on an issue similar to this.) SharePoint 2007: ItemAdded, ItemAdding, ItemUpdating or ItemUpdated...?
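
    A sketch of one direction (untested; field and folder names assumed): handle ItemUpdated, which fires after the user saves the metadata form, and copy only once the required field is filled in:

        public class NativeFilesCopyReceiver : SPItemEventReceiver
        {
            public override void ItemUpdated(SPItemEventProperties properties)
            {
                SPListItem item = properties.ListItem;
                if (item.File == null) return;                 // skip folders
                if (item["MyRequiredField"] == null) return;   // metadata not completed yet

                string dest = item.ParentList.RootFolder.Url + "/NativeFiles/" + item.File.Name;
                this.DisableEventFiring();                     // don't re-trigger this receiver
                item.File.CopyTo(dest, true);                  // the copy carries the item's metadata
                this.EnableEventFiring();
            }
        }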

  • Custom Parser for JQuery Tablesorter

    - by Tim
    I'm using the jQuery Tablesorter and have an issue with the order in which parsers are applied against table columns. I'm adding a custom parser to handle currency of the form $-3.33:

        $.tablesorter.addParser({
            id: "fancyCurrency",
            is: function(s) {
                return /^\$[\-]?[0-9,\.]*$/.test(s);
            },
            format: function(s) {
                s = s.replace(/[$,]/g, '');
                return $.tablesorter.formatFloat(s);
            },
            type: "numeric"
        });

    The problem seems to be that the built-in currency parser takes precedence over my custom parser. I could put the parser in the tablesorter code itself (before the currency parser) and it works properly, but this isn't very maintainable. I can't specify the sorter manually using something like:

        headers: {
            3:  { sorter: "fancyNumber" },
            11: { sorter: "fancyCurrency" }
        }

    since the table columns are generated dynamically from user inputs. I guess one option would be to specify the sorter to use as a CSS class and use some jQuery to explicitly specify a sorter like this question suggests, but I'd prefer to stick with dynamic detection if possible.
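
    The class-based approach can itself stay dynamic: have the code that generates the columns emit a marker class on each th, then build the headers option from those classes at init time. A sketch (table id and class name assumed):

        // derive the headers map from the generated column classes
        var headers = {};
        $('#dataTable thead th').each(function (index) {
            if ($(this).hasClass('currency')) {
                headers[index] = { sorter: 'fancyCurrency' };
            }
        });
        $('#dataTable').tablesorter({ headers: headers });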

  • slow record deletion with large ntext values

    - by asking
    I'm having trouble deleting some records via a stored procedure from a table in SQL Server 2008 R2 that has ntext columns. The stored proc is timing out, and running the query directly takes a very long time. The initial query was a straight "DELETE FROM y WHERE x = z", and I've also tried running it in batches of 1000 inside transactions, but it is still slow and times out in the stored proc. The majority of the records in the table will not be deleted each time (it's not a once-off query; it will be run repeatedly). The ntext columns are not used in the WHERE clause and I can't change the column types. Any suggestions on the quickest way to delete records with large ntext values? Thanks
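
    For reference, a common batched shape on 2008 R2 (names assumed) - with the caveat that an index on x matters at least as much as the batching, since without one every batch rescans the table, LOB columns or not:

        WHILE 1 = 1
        BEGIN
            DELETE TOP (1000) FROM dbo.y WHERE x = @z;
            IF @@ROWCOUNT = 0 BREAK;   -- stop when nothing is left to delete
        END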

  • SQL - Joining multiple records to one record

    - by ho
    I've got a SQL Server database with the following tables:

        Client (ClientID, ClientName)
        SalesAgent (AgentID, AgentName)
        Item (ItemID, Description)
        Purchase (PurchaseID, ClientID, Price)
        PurchaseSalesAgent (PurchaseID, AgentID)

    Each purchase is only ever one item for one client, but there can have been multiple agents involved. I want to return the following list of columns:

        ClientName, Description, Price, Agents

    where Agents is the names of all the agents involved in the purchase, either as a comma-separated list or as multiple columns with one agent in each. I'm looking for a way that's compatible with SQL Server 2000, but I'd also be interested to know if there's a better way of doing it in SQL Server 2008.
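
    On SQL Server 2008 the comma-separated list is usually built with FOR XML PATH; on 2000 the standard fallback is a scalar UDF that loops over the agents, concatenating into a variable. A 2008 sketch, assuming Purchase also carries an ItemID (the schema above doesn't show how Purchase links to Item):

        SELECT c.ClientName, i.Description, p.Price,
               STUFF((SELECT ', ' + sa.AgentName
                      FROM PurchaseSalesAgent psa
                      JOIN SalesAgent sa ON sa.AgentID = psa.AgentID
                      WHERE psa.PurchaseID = p.PurchaseID
                      FOR XML PATH('')), 1, 2, '') AS Agents
        FROM Purchase p
        JOIN Client c ON c.ClientID = p.ClientID
        JOIN Item i ON i.ItemID = p.ItemID;   -- assumed link column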
