Search Results

Search found 37647 results on 1506 pages for 'sql performance'.

Page 343/1506 | < Previous Page | 339 340 341 342 343 344 345 346 347 348 349 350  | Next Page >

  • Custom Detail in Linq-to-SQL Master-Detail DataGridViews

    - by Andres
    Hi, I'm looking for a way to create Linq-to-SQL Master-Detail WinForms DataGridViews where the Detail part of the Linq query is a custom one. I can do this fine:

        DataClasses1DataContext db = new DataClasses1DataContext(".\\SQLExpress");
        var myQuery = from o in db.Orders select o;
        dataGridView1.DataSource = new BindingSource() { DataSource = myQuery };
        dataGridView2.DataSource = new BindingSource() { DataSource = dataGridView1.DataSource, DataMember = "OrderDetails" };

    but I'd like to have the Detail part under my precise control, like:

        var myQuery = from o in db.Orders
                      join od in db.OrderDetails on o.ID equals od.OrderID into MyOwnSubQuery
                      select o;

    and use it for the second grid:

        dataGridView2.DataSource = new BindingSource() { DataSource = dataGridView1.DataSource, DataMember = "MyOwnSubQuery" }; // not working...

    The real reason I want it is a bit more complex (I'd like the Detail part to be some not-pre-defined join, actually), but I'm hoping the above conveyed the idea. Can I only have the Detail part as a plain sub-table coming out of the pre-defined relation, or can I do more complex stuff with the Detail part? Does anyone else feel this is kind of limited (if the first example is the best we can do)? Thanks!

    Read the article

  • SQL 2 INNER JOINS with 3 tables

    - by Jelmer Holtes
    I have a question about a SQL query. I'm building a prototype webshop in ASP.NET (Visual Studio). Now I'm looking for a way to display my products. I've built a database in MS Access; it consists of multiple tables. The tables that are important for my question are: Product, Productfoto and Foto. Below you'll see the relations between the tables. For me it is important to get three pieces of data: product title, price and image. The product title and the price are in the Product table. The images are in the Foto table. Because a product can have more than one picture, there is an N-M relation between them, so I've split it up via the Productfoto table. The connections between them are:

        product.artikelnummer -> productfoto.artikelnummer
        productfoto.foto_id -> foto.foto_id

    Then I can read the filename (in the database: foto.bestandnaam). I've created the first inner join and tested it in Access; this works:

        SELECT titel, prijs, foto_id
        FROM Product
        INNER JOIN Productfoto ON product.artikelnummer = productfoto.artikelnummer

    But I need another INNER JOIN. How could I create that? I guess something like this (this one gives me an error):

        SELECT titel, prijs, bestandnaam
        FROM Product ((
        INNER JOIN Productfoto ON product.artikelnummer = productfoto.artikkelnummer )
        INNER JOIN foto ON productfoto.foto_id = foto.foto_id)

    Can anyone help me?
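    A minimal sketch of what the working Access syntax might look like, using the table and column names from the question: Access requires each additional join to be wrapped in parentheses around the previous one (and note the misspelled "artikkelnummer" in the draft above would also need fixing):

        SELECT titel, prijs, bestandnaam
        FROM (Product
        INNER JOIN Productfoto ON Product.artikelnummer = Productfoto.artikelnummer)
        INNER JOIN Foto ON Productfoto.foto_id = Foto.foto_id;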

    Read the article

  • SQL Inner Join : DB stuck

    - by SurfingCat
    I posted this question a few days ago but I didn't explain exactly what I want, so here it is again, better formulated, with some new information added. I have a MySQL DB with MyISAM tables. The two relevant tables are:

        orders_products: orders_products_id, orders_id, products_id, products_name, products_price, products_model, final_price, ...
        products: products_id, manufacturers_id, ...

    (For full information about the tables, see the products screenshot and the orders_products screenshot.) Now what I want is this: get all orders that include products with manufacturers_id = 1, along with the name of each such product, grouped by order. What I did so far is this:

        SELECT op.orders_id, p.products_id, op.products_name, op.products_price, op.products_quantity
        FROM orders_products op, products p
        INNER JOIN products ON op.products_id = p.products_id
        WHERE p.manufacturers_id = 1
        AND p.orders_id > 10000

    (The p.orders_id > 10000 condition is just for testing, to get only a few order ids.) But this query takes a long time to execute, if it works at all; the SQL server got stuck twice. Where is the mistake?
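    A minimal sketch of a corrected query, assuming the column lists above: the draft joins products twice (once via the comma, once via the INNER JOIN), which forces a huge cross product, and it looks for orders_id in the wrong table, since that column lives in orders_products:

        SELECT op.orders_id, p.products_id, op.products_name, op.products_price, op.products_quantity
        FROM orders_products op
        INNER JOIN products p ON op.products_id = p.products_id
        WHERE p.manufacturers_id = 1
          AND op.orders_id > 10000
        ORDER BY op.orders_id;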

    Read the article

  • how to split a very large database on sql server

    - by ken jackson
    I have a 90 GB SQL Server database that I want to make more manageable. It stores stock data from 50+ different stocks from 2009 and 2010, and each stock is a separate table. Some tables have hundreds of millions of rows, and others have just a few million. What I want to do is somehow split the database, so that I don't have a single database file that is 90 GB. What I want is to be able to somehow magically split all the tables so that I can back up the 2009 data once and not have to keep including it every time I back up the entire database; however, I would like the 2009 data to be included whenever I do a query. Is partitioning the database the way to go? Will it do the above for me, or will I need some other solution? I researched partitioning, but I wasn't sure if that would solve all my problems. I wasn't able to find anything that would tell me whether or not it would migrate pre-existing data, or whether it only worked for newly inserted data. Any help or pointers would be much appreciated. Thanks in advance, Ken
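    For reference, a minimal sketch of what table partitioning over filegroups might look like here; all object names and the date column are hypothetical. Partitioning does apply to pre-existing rows when the table's clustered index is rebuilt on the partition scheme, and it keeps old data queryable while letting you back up a read-only 2009 filegroup separately. Note that table partitioning requires Enterprise Edition on SQL Server 2005/2008:

        -- hypothetical filegroups FG2009 and FG2010 must already exist in the database
        CREATE PARTITION FUNCTION pfStockYear (datetime)
            AS RANGE RIGHT FOR VALUES ('2010-01-01');   -- rows before 2010 land in the first partition

        CREATE PARTITION SCHEME psStockYear
            AS PARTITION pfStockYear TO (FG2009, FG2010);

        -- rebuilding an existing clustered index moves the existing rows onto the scheme
        CREATE CLUSTERED INDEX IX_Trades_TradeDate
            ON dbo.Trades (TradeDate)
            WITH (DROP_EXISTING = ON)
            ON psStockYear (TradeDate);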

    Read the article

  • What is a good automated data import method for SQL Server?

    - by Joel Potter
    I'm in the process of porting some SQL Server 2005 databases to SQL Server 2008. One of these databases has an associated import application (a Windows task) which uses SSIS with a DTS package to import a large dataset from an MS Access database nightly. In upgrading to SQL Server 2008, I discovered that I can't run the same console application that has been performing the imports, due to the missing ManagedDTS DLL in SQL Server 2008. It's several years old and in need of a rewrite for various reasons; plus, I've been fairly unhappy with DTS in general. The original reason DTS was chosen was speed (5 min import time compared to 30+ for ADO.NET). The format of the data to import is out of my control (the client likes Access). I would also like to be able to run the import from a machine completely separate from the server hosting SQL Server, and preferably with minimal SQL features installed. Options I've considered:

        - Creating an Access application to connect to both databases (SQL Server and Access) and perform the import (ugh!)
        - Revisiting ADO.NET to see if the original implementation was poorly written.
        - Updated SSIS packages.

    What other technologies should I be considering for this job?
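    One more option to weigh, sketched minimally below: having SQL Server pull from the Access file directly via OPENROWSET (or a linked server), which keeps the nightly job as plain T-SQL in an Agent job. The file path and table names are hypothetical; the server needs 'Ad Hoc Distributed Queries' enabled and the Access OLE DB provider installed (Microsoft.Jet.OLEDB.4.0 on 32-bit servers), and note this runs on the server itself, which may conflict with the separate-machine preference:

        -- one-time server settings (requires sysadmin)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
        RECONFIGURE;

        -- pull the Access table into a staging table on the server
        SELECT *
        INTO dbo.ImportStaging
        FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                        'C:\imports\nightly.mdb';'Admin';'',
                        'SELECT * FROM SourceTable');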

    Read the article

  • Convert SQL to LINQ in MVC3 with Ninject

    - by Jeff
    I'm using MVC3 and still learning LINQ. I'm having some trouble trying to convert a query to LINQ to Entities. I want to return an employee object.

        SELECT E.EmployeeID, E.FirstName, E.LastName, MAX(EO.EmployeeOperationDate) AS "Last Operation"
        FROM Employees E
        INNER JOIN EmployeeStatus ES ON E.EmployeeID = ES.EmployeeID
        INNER JOIN EmployeeOperations EO ON ES.EmployeeStatusID = EO.EmployeeStatusID
        INNER JOIN Teams T ON T.TeamID = ES.TeamID
        WHERE T.TeamName = 'MyTeam'
        GROUP BY E.EmployeeID, E.FirstName, E.LastName
        ORDER BY E.FirstName, E.LastName

    What I have is a few tables, but I need to get only the newest status based on the EmployeeOperationDate. This seems to work fine in SQL. I'm also using Ninject and set my query to return IEnumerable. I played around with the group by option but it then returns IGroupable. Any guidance on converting and returning the proper object type would be appreciated.

    Edit: I started writing this out in LINQ but I'm not sure how to properly return the correct type or cast this.

        public IQueryable<Employee> GetEmployeesByTeam(int teamID)
        {
            var q = from E in context.Employees
                    join ES in context.EmployeeStatuses on E.EmployeeID equals ES.EmployeeID
                    join EO in context.EmployeeOperations on ES.EmployeeStatusID equals EO.EmployeeStatusID
                    join T in context.Teams on ES.TeamID equals T.TeamID
                    where T.TeamName == "MyTeam"
                    group E by E.EmployeeID into G
                    select G;
            return q;
        }

    Edit 2: This seems to work for me.

        public IQueryable<Employee> GetEmployeesByTeam(int teamID)
        {
            var q = from E in context.Employees
                    join ES in context.EmployeeStatuses on E.EmployeeID equals ES.EmployeeID
                    join EO in context.EmployeeOperations.OrderByDescending(eo => eo.EmployeeOperationDate)
                        on ES.EmployeeStatusID equals EO.EmployeeStatusID
                    join T in context.Teams on ES.TeamID equals T.TeamID
                    where T.TeamID == teamID
                    group E by E.EmployeeID into G
                    select G.FirstOrDefault();
            return q;
        }

    Read the article

  • LinqToSQL not updating database

    - by codegarten
    Hi. I created a database and DBML in Visual Studio 2010 using its wizards. Everything was working fine until I checked the table's data (also in Visual Studio Server Explorer) and none of my updates were there.

        using (var context = new CenasDataContext())
        {
            context.Log = Console.Out;
            context.Cenas.InsertOnSubmit(new Cena() { id = 1 });
            context.SubmitChanges();
        }

    This is the code I am using to update my database. At this point my database has one table with one field (PK) named ID.

        INSERT INTO [dbo].Cenas VALUES (@p0)
        -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1]
        -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1

    This is the log from the execution (I printed the context log to the console). The problem I'm having is that these updates are not persisted in the database: when I query my database (Visual Studio Server Explorer - New Query) I see the table is empty, every time. I am using a SQL Server database file (.mdf).

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

        --------------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        --------------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        |  108K |  939 |
        |   1 | SORT ORDER BY         |                        |  108K |  939 |
        |   2 | NESTED LOOPS OUTER    |                        |  108K |  938 |
        |*  3 | HASH JOIN RIGHT OUTER |                        |  103K |  762 |
        |   4 | VIEW                  | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
        |* 20 | HASH JOIN RIGHT OUTER |                        | 73472 |  759 |
        |  21 | VIEW                  | ALL_EXTERNAL_TABLES    |  2097 |    3 |
        |* 34 | HASH JOIN RIGHT OUTER |                        | 39920 |  755 |
        |  35 | VIEW                  | ALL_MVIEWS             |    51 |    7 |
        |  58 | NESTED LOOPS OUTER    |                        | 39104 |  748 |
        |  59 | VIEW                  | ALL_TABLES             |  6704 |  668 |
        |  89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2025 |    5 |
        | 106 | VIEW                  | ALL_PART_TABLES        |   277 |   11 |
        --------------------------------------------------------------------------

    And the same query on 9i:

        --------------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        --------------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        |   16P |  55G |
        |   1 | SORT ORDER BY         |                        |   16P |  55G |
        |   2 | NESTED LOOPS OUTER    |                        |   16P | 862M |
        |   3 | NESTED LOOPS OUTER    |                        | 5251G | 992K |
        |   4 | NESTED LOOPS OUTER    |                        | 4243M | 2578 |
        |   5 | NESTED LOOPS OUTER    |                        | 2669K | 1440 |
        |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
        |* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2043 |      |
        |* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 | VIEW                  | ALL_PART_TABLES        |  852K |      |
        --------------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.

    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

        --------------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        --------------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        | 7587K | 3704 |
        |   1 | SORT ORDER BY         |                        | 7587K | 3704 |
        |*  2 | HASH JOIN OUTER       |                        | 7587K |  822 |
        |*  3 | HASH JOIN OUTER       |                        | 5262K |  616 |
        |*  4 | HASH JOIN OUTER       |                        | 2980K |  465 |
        |*  5 | HASH JOIN OUTER       |                        |  710K |  432 |
        |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
        |  50 | VIEW                  | ALL_PART_TABLES        |  852K |  104 |
        |  78 | VIEW                  | ALL_TAB_COMMENTS       |  2043 |   14 |
        |  93 | VIEW                  | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
        | 106 | VIEW                  | ALL_EXTERNAL_TABLES    | 1777K |   28 |
        --------------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input.

    To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.

    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables, that means an extra 1.4MB of memory used during population). Next: fun with the 9i dictionary views.
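    For reference, forcing hash joins on 9i is done with optimizer hints; a minimal sketch of the kind of hint involved, joining two real dictionary views in an illustrative shape only (this is not SCfO's actual population query):

        SELECT /*+ USE_HASH(t m) */
               t.OWNER, t.TABLE_NAME, m.MVIEW_NAME
        FROM   ALL_TABLES t, ALL_MVIEWS m
        WHERE  t.OWNER = m.OWNER (+)
        AND    t.TABLE_NAME = m.MVIEW_NAME (+);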

    Read the article

  • Entity Framework 4 and SQL Compact 4: How to generate database?

    - by David Veeneman
    I am developing an app with Entity Framework 4 and SQL Compact 4, using a Model First approach. I have created my EDM, and now I want to generate a SQL Compact 4.0 database to act as a data store for the model. I bring up the Generate Database Wizard and click the New Connection button to create a connection for the generated file. The Choose Data Source dialog appears, but SQL Compact 4.0 is not listed in the list of available data sources. I am running VS 2010 SP1 (beta) and I have installed the VS 2010 Tools for SQL Compact 4.0. I can create a SQL Compact 4.0 data connection from the Server Explorer; it is only in the Generate Database Wizard that the 4.0 option doesn't appear. BTW, my SQL Compact 4.0 installation does include System.Data.SqlServerCe.Entity.dll, so I should have the SQL Compact components I need. Am I doing something incorrectly, or is this a bug? Does anyone have a fix or a workaround? Thanks for your help.

    Read the article

  • How to Store and Retrieve Images Using MsSQL (Server Management Studio)

    - by Joe Majewski
    I am having difficulties when trying to insert files into an MS SQL database. I'll try to break this down as best I can:

    1. What data type should I be using to store image files (jpeg/png/gif/etc.)? Right now my table is using the image data type, but I am curious whether varbinary would be a better option.
    2. How would I go about inserting the image into the database? Does Microsoft SQL Server Management Studio have any built-in functions that allow insertion of files into tables? If so, how is that done? Also, how could this be done through the use of an HTML form, with PHP handling the input data and placing it into the table?
    3. How would I fetch the image from the table and display it on the page? I understand how to SELECT the cell's contents, but how would I go about translating that into a picture? Would I have to send a header('Content-Type: image/jpeg')?

    I have no problem doing any of these things with MySQL, but the MsSQL environment is still new to me, and I am working on a project for my job that requires the use of stored procedures to grab various data. Any and all help is appreciated. Thank you very much for your responses!
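    On the first two points, a minimal T-SQL sketch (table and file names are hypothetical): varbinary(max) is generally preferred since the image type is deprecated, and OPENROWSET ... BULK can load a file from disk in a Management Studio query window:

        -- varbinary(max) is preferred over the deprecated image type
        CREATE TABLE dbo.Pictures (
            PictureID int IDENTITY(1,1) PRIMARY KEY,
            FileName  nvarchar(260) NOT NULL,
            Content   varbinary(max) NOT NULL
        );

        -- load a file from the server's disk as a single blob
        INSERT INTO dbo.Pictures (FileName, Content)
        SELECT N'photo.jpg', img.BulkColumn
        FROM OPENROWSET(BULK N'C:\images\photo.jpg', SINGLE_BLOB) AS img;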

    Read the article

  • Remove redundant SQL code

    - by Dave Jarvis
    Code

    The following code calculates the slope and intercept for a linear regression against a slathering of data. It then applies the equation y = mx + b against the same result set to calculate the value of the regression line for each row. Can the two separate sub-selects be joined so that the data and its slope/intercept are calculated without executing the data-gathering part of the query twice?

        SELECT AVG(D.AMOUNT) as AMOUNT,
               Y.YEAR * ymxb.SLOPE + ymxb.INTERCEPT as REGRESSION_LINE,
               Y.YEAR as YEAR,
               MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
        FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D,
          (SELECT ((avg(t.AMOUNT * t.YEAR)) - avg(t.AMOUNT) * avg(t.YEAR)) /
                  (stddev( t.AMOUNT ) * stddev( t.YEAR )) as CORRELATION,
                  ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
                  (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
                  ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
                  (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
           FROM (
             SELECT AVG(D.AMOUNT) as AMOUNT,
                    Y.YEAR as YEAR,
                    MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
             FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
             WHERE $X{ IN, C.ID, CityCode }
               AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius}
               AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID
               AND Y.YEAR BETWEEN 1900 AND 2009
               AND M.YEAR_REF_ID = Y.ID
               AND M.CATEGORY_ID = $P{CategoryCode}
               AND M.ID = D.MONTH_REF_ID
               AND D.DAILY_FLAG_ID <> 'M'
             GROUP BY Y.YEAR
           ) t
          ) ymxb
        WHERE $X{ IN, C.ID, CityCode }
          AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius}
          AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID
          AND Y.YEAR BETWEEN 1900 AND 2009
          AND M.YEAR_REF_ID = Y.ID
          AND M.CATEGORY_ID = $P{CategoryCode}
          AND M.ID = D.MONTH_REF_ID
          AND D.DAILY_FLAG_ID <> 'M'
        GROUP BY Y.YEAR

    Question

    How do I execute the duplicate bits only once per query, instead of twice? The duplicate bit is the WHERE clause:

        $X{ IN, C.ID, CityCode }
        AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius}
        AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID
        AND Y.YEAR BETWEEN 1900 AND 2009
        AND M.YEAR_REF_ID = Y.ID
        AND M.CATEGORY_ID = $P{CategoryCode}
        AND M.ID = D.MONTH_REF_ID
        AND D.DAILY_FLAG_ID <> 'M'

    Related: http://stackoverflow.com/questions/1595659/how-to-eliminate-duplicate-calculation-in-sql

    Thank you!
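    One way to gather the data only once in MySQL 5 (which has no CTEs), sketched under the assumption that multiple statements can be run: stage the per-year aggregates in a temporary table, compute the slope and intercept into user variables, then select the regression line from the staged rows. Staging also sidesteps MySQL's restriction that a temporary table cannot be referenced twice in one query:

        CREATE TEMPORARY TABLE tmp_year_amounts AS
        SELECT AVG(D.AMOUNT) AS AMOUNT,
               Y.YEAR        AS YEAR
        FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
        WHERE ...            -- the shared WHERE clause from the question goes here, once
        GROUP BY Y.YEAR;

        SELECT ((SUM(YEAR) * SUM(AMOUNT)) - (COUNT(1) * SUM(YEAR * AMOUNT))) /
               (POWER(SUM(YEAR), 2) - COUNT(1) * SUM(POWER(YEAR, 2))),
               ((SUM(YEAR) * SUM(YEAR * AMOUNT)) - (SUM(AMOUNT) * SUM(POWER(YEAR, 2)))) /
               (POWER(SUM(YEAR), 2) - COUNT(1) * SUM(POWER(YEAR, 2)))
        INTO @slope, @intercept
        FROM tmp_year_amounts;

        SELECT AMOUNT,
               YEAR * @slope + @intercept AS REGRESSION_LINE,
               YEAR,
               MAKEDATE(YEAR, 1) AS AMOUNT_DATE
        FROM tmp_year_amounts;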

    Read the article

  • ASP.net MVC Linq-To-SQL Many-To-Many Field Binding

    - by user336858
    Hi there, The short version of this question is: "Is there a way to gracefully handle database insertion for an object that has a many-to-many field that has been set up in a partial class?" Apologies if it's been asked before. Example: suppose I have a typical MVC setup with the tables:

        Posts {PostID, ...}
        Categories {CategoryID, ...}

    A post can have more than one category, and a category can identify more than one post. Thus suppose further that I need an extra table:

        PostCategories {PostID, CategoryID, ...}

    This handles the many-to-many relationship between posts and categories. As far as I know, there's no way to do this in Linq-to-SQL right now, so I have to shoehorn it in by adding a partial Post class to the project to add that functionality. Something like:

        public partial class Post
        {
            public IEnumerable<Category> Categories
            {
                get { ... }
                set { ... }
            }
        }

    So I can now create a "Create" view that automatically populates a "Categories" UI item. This is where the trouble starts. So here's my question: how do you get automatic object model binding to work cleanly with an object that has a many-to-many relationship to control? The workaround that makes many-to-many relationships possible relies on the Post object having a PostID in order to be associated with CategoryID(s), which is only issued after the Post object has been submitted for validation and insertion. Bit of a Catch-22 here. Any terminology, links, or tips you can provide would be tremendously helpful!

    Read the article

  • Android performance/issues with Corona SDK?

    - by B5Fan74
    I know this is a fairly broad question. We are looking to develop a mobile game and want to use a multi-platform engine/SDK. We like what we see with Corona, but in doing some reading we are seeing a lot of references to poor performance on the 'droid platforms. I am unsure how much of this is still relevant; many of the articles/posts/references/discussions vary in date from 18 months ago to earlier this year. Is there a reason we should not pursue Corona if Android support is important to us? The game is going to be a 2D isometric view. Thanks!

    Read the article

  • SQL Query to duplicate records based on If statement

    - by user328371
    Hi, I'm trying to write an SQL query that will duplicate records depending on a field in another table. I am running MySQL 5. (I know duplicating records shows that the database structure is bad, but I did not design the database and am not in a position to redo it all; it's a Shopp e-commerce database running on WordPress.) Each product with a particular attribute needs a link to the same few images, so the product will need a row per image in a table. The database doesn't actually contain the images, just their filenames (the images are clipart for a customer to select from). Based on the records matching this...

        SELECT * FROM `wp_shopp_spec` WHERE name = 'Can Be Personalised' AND content = 'Yes'

    ...I want to do something like this: for each record that matches that query, copy records 5134-5139 from wp_shopp_asset, but change the id so it's unique and set the cell in column 'parent' to the value of 'product' from the table wp_shopp_spec. This means 6 new records are created for each record matching the above query, all with the same value in 'parent' but with unique ids, and every other column copied from the originals (i.e. records 5134-5139). Hope that's clear enough; any help greatly appreciated.
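    A minimal sketch of the INSERT ... SELECT shape this could take, assuming id is auto-increment and using a hypothetical column list for wp_shopp_asset (adjust to the real columns). Every combination of a matching spec row and a template row 5134-5139 produces one new row:

        INSERT INTO wp_shopp_asset (parent, context, name, value)   -- hypothetical columns; id omitted so auto-increment assigns it
        SELECT s.product, a.context, a.name, a.value
        FROM wp_shopp_asset a
        CROSS JOIN (
            SELECT product
            FROM wp_shopp_spec
            WHERE name = 'Can Be Personalised' AND content = 'Yes'
        ) s
        WHERE a.id BETWEEN 5134 AND 5139;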

    Read the article

  • problems with graphic performance in Ubuntu 12.04

    - by Falk
    In advance: my English is not perfect ;)

        Processor: Intel® Core™2 Duo CPU E8500 @ 3.16GHz × 2
        Memory: 3.9 GiB
        Graphics: GeForce GTX 460
        OS: Ubuntu 12.04 32-bit, freshly installed (previously 11.10 32-bit)
        Driver: NVIDIA's accelerated driver (version current-updates)

    Problem: in Ubuntu 12.04 I have problems with graphics performance (in games, for example Volley Brawl, Neverball, Beep and Minecraft) and with window effects (always when minimizing, maximizing...). There are lags in animations, and the lags in games are extremely annoying. The problems occur only in Ubuntu 12.04; everything was good in 11.10. No problems with YouTube videos or HD videos. Is there a solution? I have found nothing here in the forum or on Google. Or is an update coming soon for the driver or whatever? (This bug is already registered in Launchpad.) Thank you!

    Read the article

  • SSL over TDS, SQL Server 2005 Express

    - by reuvenab
    I captured the packets sent/received by a Win XP machine when connecting to SQL Server 2005 Express using TLS encryption:

    1. Server and Client exchange Hello messages
    2. Server and Client send ChangeCipherSpec messages
    3. Then Server and Client send a strange message that is not described in the TLS protocol

    What is that message, and is SSL over TDS standards-compliant at all? Server-side capture:

        16                                                 SSL Handshake
        03 01 00 4a
        02                                                 ServerHello
        00 00 46
        03 01
        4b dd 68 59                                        GMT
        33 13 37 98 10 5d 57 9d ff 71 70 dc d6 6f 9e 2c    Random[00..13]
        cb 96 c0 2e b3 2f 9b 74 67 05 cc 96                Random[14..27]
        20
        72 26 00 00 0f db 7f d9 b0 51 c2 4f cd 81 4c       Session ID
        3f e3 d2 d1 da 55 c0 fe 9b 56 b7 6f 70 86 fe bb    Session ID
        54                                                 Session ID
        00 04                                              Cipher Suite
        00                                                 Compression
        14 03 01 00 01
        01                                                 ChangeCipherSpec
        16 03 01 ????                                      Finished ???
        00 20 d0 da cc c4 36 11 43 ff 22 25 8a e1 38 2b    ???? ???
        71 ce f3 59 9e 35 b0 be b2 4b 1d c5 21 21 ce 41    ???? ???
        8e 24

    Read the article

  • Distribute budget over for ranked components in SQL

    - by Lee
    Assume I have a budget of $10 (any integer) and I want to distribute it over records that have a rank field and varying needs. Example:

        rank  Req.  Fulfilled?
        1     $3    Y
        2     $4    Y
        3     $2    Y
        4     $3    N

    Ranks 1 to 3 should be fulfilled because they are within budget, whereas the one ranked 4 should not. I want an SQL query that solves that. Below is my initial script:

        CREATE TABLE budget (
            id VARCHAR (32),
            budget INTEGER,
            PRIMARY KEY (id));

        CREATE TABLE component (
            id VARCHAR (32),
            rank INTEGER,
            req INTEGER,
            satisfied BOOLEAN,
            PRIMARY KEY (id));

        INSERT INTO budget (id, budget) VALUES ('1', 10);
        INSERT INTO component (id, rank, req) VALUES ('1', 1, 3);
        INSERT INTO component (id, rank, req) VALUES ('2', 2, 4);
        INSERT INTO component (id, rank, req) VALUES ('3', 3, 2);
        INSERT INTO component (id, rank, req) VALUES ('4', 4, 3);

    Thanks in advance for your help. Lee
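    A minimal sketch of one way to answer this, given the schema above: build a running total with a self-join (no window functions needed, so it should work on most engines). A component is fulfilled when the summed requirements of all ranks up to and including its own fit within the budget:

        SELECT c.id,
               c.rank,
               c.req,
               CASE WHEN SUM(c2.req) <= b.budget THEN 'Y' ELSE 'N' END AS fulfilled
        FROM component c
        JOIN component c2 ON c2.rank <= c.rank   -- all components at this rank or better
        CROSS JOIN budget b
        GROUP BY c.id, c.rank, c.req, b.budget
        ORDER BY c.rank;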

    Read the article

  • Oracle SQL: ROLLUP not summing correctly

    - by tommy-o-dell
    Hi guys, ROLLUP seems to be counting the number of units correctly, but not the number of trains. Any idea what could be causing that? The output from the query looks like this: the sum of the Units column in yellow is 53, but the rollup is showing 51. The number of units adds up correctly, though. And here's the Oracle SQL query...

        select t.year, t.week,
               decode(t.mine_id, NULL, 'PF', t.mine_id) as mine_id,
               decode(t.product, Null, 'LF', t.product) as product,
               decode(t.mine_id||'-'||t.product, '-', 'PF', t.mine_id||'-'||t.product) as code,
               count(distinct t.tpps_train_id) as trains,
               count(1) as units
        from (
            select trn.mine_code as mine_id,
                   trn.train_tpps_id as tpps_train_id,
                   round((con.calibrated_weight_total - con.empty_weight_total), 2) as tonnes
            from widsys.train trn
            INNER JOIN widsys.consist con USING (train_record_id)
            where trn.direction = 'N'
              and (con.calibrated_weight_total - con.empty_weight_total) > 10
              and trn.num_cars > 10
              and con.consist_no not like '_L%'
        ) w,
        (
            select to_char(td.datetime_act_comp_dump - 7/24, 'IYYY') as year,
                   to_char(td.datetime_act_comp_dump - 7/24, 'IW') as week,
                   td.mine_code as mine_id,
                   td.train_id as tpps_train_id,
                   pt.product_type_code as product
            from tpps.train_details td
            inner join tpps.ore_products op using (ore_product_key)
            inner join tpps.product_types pt using (product_type_key)
            where to_char(td.datetime_act_comp_dump - 7/24, 'IYYY') = 2010
              and to_char(td.datetime_act_comp_dump - 7/24, 'IW') = 12
            order by td.datetime_act_comp_dump asc
        ) t
        where w.mine_id = t.mine_id
          and w.tpps_train_id = t.tpps_train_id
        having t.product is not null or t.mine_id is null
        group by t.year, t.week, rollup(t.mine_id, t.product)

    Read the article

  • Moving from a non-clustered PK to a clustered PK in SQL 2005

    - by adaptr
    Hi all, I recently asked this question in another thread and thought I would reproduce it here with my solution: what if I have an auto-increment INT as my non-clustered primary key, and there are about 15 foreign keys defined to it? (snide comment about original designer being braindead in the original :) ) This is a 15M-row table on a live database, SQL Standard, so dropping indexes is out of the question. Even temporarily dropping the foreign key constraints will be difficult. I'm curious if anybody has a solution that causes minimal downtime. I tested this in our testing environment and finally found that the downtime wasn't as severe as I had originally feared. I ended up writing a script that drops all FK constraints, then drops the non-clustered key, re-creates the PK as a clustered index, and finally re-creates all FKs WITH NOCHECK to avoid trawling through all FKs to check constraint compliance. Then I just enable the CHECK constraints to enable constraint checking from that point onwards, and all is dandy :) The most important thing to realize is that during the time the FKs are absent, there MUST NOT be any INSERTs or DELETEs on the parent table, as this may break the constraints and cause issues in the future. The total time taken for clustering a 15M-row, 800MB index was ~4 minutes :)
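    For one child table, the script's steps might look like the minimal sketch below; all table, column and constraint names are hypothetical, and the FK steps would be repeated for each of the ~15 referencing tables:

        -- 1. drop the FK and the old non-clustered PK
        ALTER TABLE dbo.Child  DROP CONSTRAINT FK_Child_Parent;
        ALTER TABLE dbo.Parent DROP CONSTRAINT PK_Parent;

        -- 2. re-create the PK as a clustered index
        ALTER TABLE dbo.Parent ADD CONSTRAINT PK_Parent PRIMARY KEY CLUSTERED (ID);

        -- 3. re-create the FK without validating existing rows (fast, but leaves it untrusted)
        ALTER TABLE dbo.Child WITH NOCHECK
            ADD CONSTRAINT FK_Child_Parent FOREIGN KEY (ParentID) REFERENCES dbo.Parent (ID);

        -- 4. enforce the constraint for new data from here on
        ALTER TABLE dbo.Child CHECK CONSTRAINT FK_Child_Parent;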

    Read the article

  • Delaying LINQ to SQL Select Query Execution

    - by Maxim Z.
    I'm building an ASP.NET MVC site that uses LINQ to SQL. In my search method, which has some required and some optional parameters, I want to build a LINQ query while testing for the existence of those optional parameters. Here's what I'm currently thinking:

        using (var db = new DBDataContext())
        {
            IQueryable<Listing> query = null;

            // Handle required parameter
            query = db.Listings.Where(l => l.Lat >= form.bounds.extent1.latitude
                                        && l.Lat <= form.bounds.extent2.latitude);

            // Handle optional parameter
            if (numStars != null)
                query = query.Where(l => l.Stars == (int)numStars);

            // Other parameters...

            // Execute query (does this happen here?)
            var result = query.ToList();

            // Process query...

    Will this implementation "bundle" the Where clauses and then execute the bundled query? If not, how should I implement this feature? Also, is there anything else I can improve? Thanks in advance.

    Read the article

  • Self referencing update SQL statement for Informix

    - by CheeseConQueso
    Need some Informix SQL... Courses get a regular grade, but their associated labs get a grade of 'LAB'. I need to update the table so that the lab grade matches the course grade. Also, if there is no corresponding course for a lab, it means the course was canceled; in that case, I want to place a flag value of 'X' for its grade. Example data before update:

        id   yr    sess  crs_no    hrs             grd
        725  2009  FA    COLL101   3.000000000000  C
        725  2009  FA    ENGL021   3.000000000000  FI
        725  2009  FA    ENGL021L  1.000000000000  LAB
        725  2009  FA    ENGL031   3.000000000000  FNI
        725  2009  FA    ENGL031L  1.000000000000  LAB
        725  2009  FA    MATH010   3.000000000000  FNI
        725  2010  SP    AOTE101   3.000000000000  C
        725  2010  SP    ENGL021L  1.000000000000  LAB
        725  2010  SP    ENGL031   3.000000000000  FI
        725  2010  SP    ENGL031L  1.000000000000  LAB
        725  2010  SP    MATH010   3.000000000000  FNI
        726  2010  SP    SPAN101   3.000000000000  FN

    Example data after update:

        id   yr    sess  crs_no    hrs             grd
        725  2009  FA    COLL101   3.000000000000  C
        725  2009  FA    ENGL021   3.000000000000  FI
        725  2009  FA    ENGL021L  1.000000000000  FI
        725  2009  FA    ENGL031   3.000000000000  FNI
        725  2009  FA    ENGL031L  1.000000000000  FNI
        725  2009  FA    MATH010   3.000000000000  FNI
        725  2010  SP    AOTE101   3.000000000000  C
        725  2010  SP    ENGL021L  1.000000000000  X
        725  2010  SP    ENGL031   3.000000000000  FI
        725  2010  SP    ENGL031L  1.000000000000  FI
        725  2010  SP    MATH010   3.000000000000  FNI
        726  2010  SP    SPAN101   3.000000000000  FN

    I worked out a solution for this, but it required a lot of on-the-fly composite foreign keys built by concatenating the id, yr, sess, and substring'd crs_no. My solution is not only overkill, but it has gaps in it and takes too long to process. I know there is an easier way to do this, but I've gone so far down one road that I am having trouble thinking of a different approach.
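    A minimal sketch of the correlated-subquery shape this could take, assuming a table named grades (hypothetical) and that a lab's crs_no is exactly the course code plus a trailing 'L'. NVL supplies the 'X' flag when no matching course row exists. Note that some Informix versions reject a subquery that references the table being updated; if yours does, stage the course grades in a temp table first:

        UPDATE grades
        SET grd = NVL(
            (SELECT c.grd
             FROM grades c
             WHERE c.id = grades.id
               AND c.yr = grades.yr
               AND c.sess = grades.sess
               AND c.crs_no || 'L' = grades.crs_no),   -- lab code = course code + 'L'
            'X')
        WHERE grd = 'LAB';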

    Read the article

  • Google I/O 2012 - High Performance HTML5

    For years we built web apps that far outpaced the capabilities of the browsers they ran in. Just as the browsers were catching up, HTML5 came on the scene - video and audio, canvas, SVG, app cache, localStorage, @font-face, and more. Now the browsers are racing to stay ahead of the wave that's building as developers adopt these new capabilities. Is your HTML5 app going to ride the wave or be dashed on the rocks, leaving users stranded? Learn which HTML5 features to seek out and which to avoid when it comes to building fast HTML5 web apps. This session will be live captioned. From: GoogleDevelopers

    Read the article

  • ASP.NET Menu, NavBar And Pager Performance Improvements v2010 vol 1

    Check out the improvements we've made to some of our ASP.NET controls in the DXperience v2010.1 release. We changed the rendering of our ASP.NET AJAX Menu, Navigation Pane and Pager controls. The controls now use semantic rendering combined with advanced CSS styles, which results in a dramatic decrease in HTML output, improved performance and a reduction in the server's workload. Also, several of our other ASP.NET controls, like the ASPxGridView and ASPxScheduler, benefit because the primary...

    Read the article
