Search Results

Search found 16059 results on 643 pages for 'global temp tables'.


  • DB Design to store custom fields for a table

    - by Fazal
    Hi all, this question came up based on the responses I got to http://stackoverflow.com/questions/2785033/getting-wierd-issue-with-to-number-function-in-oracle. Since everyone suggested that storing numeric values in VARCHAR2 columns is bad practice (which I totally agree with), I am wondering about a basic design choice our team has made and whether there is a better way to design it.

    Problem statement: we have many tables where we want to offer a certain number of custom fields. The number of custom fields is known up front, but which kind of attribute maps to each column is decided by the user. A hypothetical scenario: say you have a laptop product that stores 50 attribute values for every laptop record, and the attributes are defined by the admin who creates the laptop. One user creates a laptop lap1 with attributes String, String, numeric, numeric, String; a second user creates a laptop lap2 with attributes String, numeric, String, String, numeric. Currently the data gets persisted as follows:

        Laptop Table
        Id  Name  field1  field2  field3  field4  field5
        1   lap1  lappy   lappy   12      13      lappy
        2   lap2  lappy2  13      lappy2  lapp2   12

    This example roughly simulates our requirement and our design. Now if somebody is looking up lap2 records with a comparison on field2, we need to apply TO_NUMBER:

        select * from laptop where name='lap2' and TO_NUMBER(field2) < 15

    TO_NUMBER fails in some cases, when the query plan decides to apply to_number before the other filter. Questions: Is this a valid design? What are the alternative ways to solve this problem? One of our teammates suggested creating tables on the fly for such cases; is that a good idea? How do popular ORM tools handle custom fields or flex fields? I hope I was able to make sense in the question; sorry for such a long text.
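
    One common workaround here (a hedged sketch, not part of the original question) is to guard the conversion so TO_NUMBER only ever sees numeric-looking strings, regardless of how the optimizer orders the predicates. REGEXP_LIKE and conditional CASE evaluation are standard Oracle features; the table and column names are the question's hypothetical ones:

        -- The CASE expression yields NULL for non-numeric strings, so
        -- TO_NUMBER is never applied to them even if this predicate is
        -- evaluated before the name filter.
        select *
        from laptop
        where name = 'lap2'
          and case
                when regexp_like(field2, '^-?[0-9]+(\.[0-9]+)?$')
                then to_number(field2)
              end < 15;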


  • In an Android application, should I have one content provider per table or only one for the entire application?

    - by Andrew Dyer
    I have years of experience with Microsoft .NET development (primarily C#) and have been working to come up to speed on Android and Java. So far, I've built a small application with a couple screens and a working content provider. All of the examples I've seen for developing content providers typically work with a single table, so I got the impression that this was the convention. I built a couple more content providers for other tables and ran into the "Unknown URI" IllegalArgumentException when I tried to test them. The exception is being thrown by one of my content providers, but not the one I was intending to call. It appears that my application is using the first content provider in the AndroidManifest.xml file, which now has me wondering if I should only have a single content provider for the entire application. Are there any best practices and/or examples for working with multiple tables in an Android application? Should I have one content provider per table or only one for the entire application? If the former, how do I resolve URIs to the proper provider? If the latter, how do I keep my content provider code from being polluted with switch statements?


  • How can I manage a FIFO queue in a database with SQL?

    - by Jonas
    I have two tables in my database, one for In and one for Out. They have two columns, Quantity and Price. How can I write a SQL query that selects the correct price? For example: if I have 3 items in for 75 and then 3 items in for 80, and then two out for 75, the third out should be for 75 (X) and the fourth out should be for 80 (Y). How can I write the price query for X and Y? They should use the price from the third and fourth rows. Is there any way to SELECT the third row in the In table? I cannot use auto_increment as the identifier for, e.g., the "third" row, because the tables will contain posts for other items too. The rows will not be deleted; they are kept for accounting reasons.

        SELECT Price FROM In WHERE ...?

    New database design:

        In
        +-----------+-------+
        | Supply_ID | Price |
        +-----------+-------+
        | 1         | 75    |
        | 1         | 75    |
        | 1         | 75    |
        | 2         | 80    |
        | 2         | 80    |
        +-----------+-------+

        Out
        +-------------+-------+
        | Delivery_ID | Price |
        +-------------+-------+
        | 1           | 75    |
        | 1           | 75    |
        | 2           | X     | <- ?
        | 3           | Y     | <- ?
        +-------------+-------+

    Old database design:

        In
        +-----------+----------+-------+
        | Supply_ID | Quantity | Price |
        +-----------+----------+-------+
        | 1         | 3        | 75    |
        | 2         | 3        | 80    |
        +-----------+----------+-------+

        Out
        +-------------+----------+-------+
        | Delivery_ID | Quantity | Price |
        +-------------+----------+-------+
        | 1           | 2        | 75    |
        | 2           | 1        | X     | <- ?
        | 3           | 1        | Y     | <- ?
        +-------------+----------+-------+
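
    A hedged sketch of one way to pick the "Nth" inbound price under the new design, assuming a database with window functions (MySQL 8+, PostgreSQL, SQL Server). The item_id filter is hypothetical, added only because the question says other items share the same tables, and the table is renamed because In is a reserved word:

        -- Number the inbound rows in arrival order for one item, then pick
        -- the row whose position matches the outbound row being priced.
        SELECT price
        FROM (
            SELECT price,
                   ROW_NUMBER() OVER (ORDER BY supply_id) AS rn
            FROM in_table            -- "In" renamed; reserved word
            WHERE item_id = 42       -- hypothetical per-item filter
        ) AS numbered
        WHERE rn = 3;                -- the third out takes the third in-price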


  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput to 100 times normal. For example, we are going to be featured on a television show, and I expect that in the hour after the show I'll get more than 100 times my normal traffic. My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places:

    - RAM buffers
    - commit log
    - binary log
    - actual tables
    - all of the above places on my DB slave

    This is too much "durability" given that I'm on an EC2 node and most of the stuff goes across the same network pipe (file systems are network attached). Plus the drives are just slow. The data is not high value, and I'd rather take a small chance of a few minutes of data loss than a high probability of an outage when the crowd arrives. During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commit log or binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables. In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that? If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users I am creating accounts for all of them, so that's a write-heavy workload.
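
    For reference, a hedged sketch of the standard InnoDB knobs that trade durability for write throughput. These are real MySQL system variables, but the values are illustrative, not a recommendation for any particular workload:

        -- Write the InnoDB log at commit but fsync it only about once per
        -- second, so a crash can lose up to a second of transactions.
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;

        -- Let the operating system decide when to sync the binary log.
        SET GLOBAL sync_binlog = 0;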


  • Can one connection get details of another? Or, how can I get the most detailed pending transaction info?

    - by bob-the-destroyer
    Is there a MySQL statement that provides full details of any other open connection or user? For this particular case, on MyISAM tables specifically. Looking at MySQL's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose.

    For example: remote ODBC connection one is inserting several thousand records, which due to a slow connection speed can take up to an hour. TCP connection two, using PHP on the server's localhost, is running SELECT queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like it to first check that there are no pending inserts on any other connection on those specific tables, so it can instead wait until all data is available. If the table is currently being written to, I'd like to give the user of connection two an approximation of how much longer to wait, based on the number of pending inserts. Ideally, per table, I'd like to get back via a query the timestamp when connection one began the write, the total inserts left to be done, and the total inserts already completed. Instead of insert counts, even knowing the number of bytes written and left to write would work just fine here.

    Since connection two is a TCP connection via a PHP script, all I can really use in that script is some sort of query. I suppose if I have to, since it is on localhost, I can exec() a mysql command-line option that outputs this info, but I'd rather not. I suppose I could simply update a custom-made transaction log before and after this massive insert task, which the PHP script can check, but hopefully there's a built-in MySQL feature I can take advantage of.
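
    A hedged note: the closest built-in facility I know of is the process list, which shows each connection's current statement and how long it has been running, though not per-table insert progress, so treat this as a starting point rather than a full answer:

        -- Every open connection, its current statement, and its age in seconds.
        SHOW FULL PROCESSLIST;

        -- The same data, filterable with ordinary SQL (MySQL 5.1+).
        SELECT id, user, host, db, command, time, state, info
        FROM information_schema.PROCESSLIST
        WHERE command <> 'Sleep';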


  • Problem with SQL Server "EXECUTE AS"

    - by Vilx-
    I've got the following setup: there is a SQL Server DB with several tables that have triggers set on them (which collect history data). These triggers are CLR stored procedures with EXECUTE AS 'HistoryUser'. The HistoryUser user is a simple user in the database without a login. It has enough permissions to read from all tables and write to the history table.

    When I back up the DB and then restore it to another machine (a virtual machine in this case, but that does not matter), the triggers don't work anymore. In fact, no impersonation of the user works anymore. Even a simple statement such as this:

        exec ('select 3') as user = 'HistoryUser'

    produces an error:

        Cannot execute as the database principal because the principal "HistoryUser"
        does not exist, this type of principal cannot be impersonated, or you do not
        have permission.

    I read on MSDN that this can occur if the DB owner is a domain user, but it isn't. And even if I change it to anything else (their recommended solution), the problem remains. If I create another user without login, I can use it for impersonation just fine. That is, this works:

        create user TestUser without login
        go
        exec ('select 3') as user = 'TestUser'

    I do not want to recreate all those triggers, so is there any way I can make the existing HistoryUser work? Bump: sorry, but this is kinda urgent...
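
    A hedged aside: failures like this after a restore are often cured by resetting the database owner, which re-maps the dbo SID to a login that exists on the new server. It is a common fix for orphaned-principal errors rather than a guaranteed answer to this case, and [YourDb] and [sa] are placeholders:

        -- Re-assign the database owner to a login known on this server,
        -- then retry the impersonation.
        ALTER AUTHORIZATION ON DATABASE::[YourDb] TO [sa];

        EXECUTE ('select 3') AS USER = 'HistoryUser';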


  • T4 trouble compiling transformation

    - by John Leidegren
    I can't figure this one out: why doesn't T4 locate the IEnumerable type? I'm using Visual Studio 2010, and I just hope someone knows why.

        <#@ template debug="true" hostspecific="false" language="C#" #>
        <#@ assembly name="System.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" #>
        <#@ import namespace="System" #>
        <#@ import namespace="System.Data" #>
        <#@ import namespace="System.Data.SqlClient" #>
        <#@ output extension=".cs" #>
        public static class Tables
        {
        <#
            var q = @"
                SELECT tbl.name 'table', col.name 'column'
                FROM sys.tables tbl
                INNER JOIN sys.columns col ON col.object_id = tbl.object_id
            ";
            // var source = Execute(q);
        #>
        }
        <#+
        static IEnumerable Execute(string cmdText)
        {
            using (var conn = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=t4build;Integrated Security=True;"))
            {
                conn.Open();
                var cmd = new SqlCommand(cmdText, conn);
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                    }
                }
            }
        }
        #>

    The error:

        Error 2 Compiling transformation: The type or namespace name 'IEnumerable'
        could not be found (are you missing a using directive or an assembly
        reference?) c:\Projects\T4BuildApp\T4BuildApp\TextTemplate1.tt 26 9


  • Table index design

    - by Swoosh
    I would like to add one or more indexes to my table and am looking for general ideas on how to choose them, beyond the clustered PK. What should I look for when doing this? My example: this table (let's call it the Task table) is going to be the biggest table of the whole application, with millions of records expected. Important: massive bulk inserts add data to this table. The table has 27 columns (so far, and counting): 9 int columns (ids), 10 varchar columns, 2 bit columns, and 5 datetime columns.

    Int columns: all of these are int ids referencing tables that are usually much smaller than Task (10-50 records max), for example a Status table (with values like "open", "closed") or a Priority table (with values like "important", "not so important", "normal"). There is also a parent-ID column (a self reference). All the "small" tables have clustered PKs, the usual way.

    String columns: there is a Company column (a string!) that is "5 characters long all the time", and every user will be restricted by it. If there are 15 different companies in Task, the logged-in user would only see one, so there is always a filter on this column. Might it be a good idea to add an index to it?

    Date columns: I think these aren't usually indexed... right? Or can/should they be?
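
    A hedged sketch of the kind of index the Company question points at, assuming SQL Server syntax and the hypothetical Task table above. An always-filtered, fixed-width column is a classic nonclustered index candidate, though the cost to the massive bulk inserts has to be weighed against it; the INCLUDE columns are purely illustrative:

        -- Nonclustered index on the always-filtered column; INCLUDE keeps a
        -- couple of commonly selected columns in the leaf pages so frequent
        -- queries can avoid key lookups.
        CREATE NONCLUSTERED INDEX IX_Task_Company
            ON Task (Company)
            INCLUDE (StatusID, PriorityID);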


  • Use a foreign key mapping to get data from the other table using Python and SQLAlchemy.

    - by Az
    Hmm, the title was harder to formulate than I thought. Basically, I've got these simple classes mapped to tables using SQLAlchemy. I know they're missing a few items, but those aren't essential for highlighting the problem.

        class Customer(object):
            def __init__(self, uid, name, email):
                self.uid = uid
                self.name = name
                self.email = email

            def __repr__(self):
                return str(self)

            def __str__(self):
                return "Cust: %s, Name: %s (Email: %s)" % (self.uid, self.name, self.email)

    The above is basically a simple customer with an id, a name and an email address.

        class Order(object):
            def __init__(self, item_id, item_name, customer):
                self.item_id = item_id
                self.item_name = item_name
                self.customer = None    # no customer yet; the code's job is to assign one

            def __repr__(self):
                return str(self)

            def __str__(self):
                return "Item ID %s: %s, has been ordered by customer no. %s" % (self.item_id, self.item_name, self.customer)

    This is the Order class, which just holds the order information: an id, a name and a reference to a customer. The reference is initialised to None to indicate that the item doesn't have a customer yet; the code's job will be to assign it one. The following code maps these classes to their respective database tables:

        # SQLAlchemy database transmutation
        engine = create_engine('sqlite:///:memory:', echo=False)
        metadata = MetaData()

        customers_table = Table('customers', metadata,
            Column('uid', Integer, primary_key=True),
            Column('name', String),
            Column('email', String)
        )

        orders_table = Table('orders', metadata,
            Column('item_id', Integer, primary_key=True),
            Column('item_name', String),
            Column('customer', Integer, ForeignKey('customers.uid'))
        )

        metadata.create_all(engine)
        mapper(Customer, customers_table)
        mapper(Order, orders_table)

    Now if I do something like:

        for order in session.query(Order):
            print order

    I can get a list of orders in this form:

        Item ID 1001: MX4000 Laser Mouse, has been ordered by customer no. 12

    What I want to do is find out customer 12's name and email address (which is why I used the ForeignKey into the Customer table). How would I go about it?


  • Elegant code question: How to avoid creating unneeded object

    - by SeaDrive
    The root of my question is that the C# compiler is too smart. It detects a path via which an object could be undefined, so it demands that I fill it. In the code, I look at the tables in a DataSet to see if there is one that I want; if not, I create a new one. I know that dtOut will always be assigned a value, but the compiler is not happy unless it's assigned a value when declared. This is inelegant. How do I rewrite this in a more elegant way?

        System.Data.DataTable dtOut = new System.Data.DataTable();
        // ...

        // find table with tablename = grp
        // if none, create new table
        bool bTableFound = false;
        foreach (System.Data.DataTable d1 in dsOut.Tables)
        {
            string d1_name = d1.TableName;
            if (d1_name.Equals(grp))
            {
                dtOut = d1;
                bTableFound = true;
                break;
            }
        }
        if (!bTableFound)
            dtOut = RptTable(grp);


  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point).

    Excel has the ideal interface for the report consumers in the form of its data lists: users can filter and segment the data on the fly to see the specific details they are interested in, and they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files?

    The next tool I tried was MS Access, and I found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file's at 120-150 MB!). The import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables to the database tables that store the report data, and that was many times slower (for some reason, sqlplus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data).

    (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)


  • Scroll table upwards or downwards using up and down button

    - by Psinyee
    I want to create a banner scrolling function for my homepage. I am not too sure which programming language to use here. I want to code something similar to http://www.squidfaceandthemeddler.com/, but I want the footer to remain in the same position while the user scrolls the table. Is it possible to get that result? I have created simple tables and 2 images for the user to click on. Here is my code for the tables:

        <div id="banner" style="height:750px; width:600px; overflow:scroll;">
          <table width="750px" height="600px">
            <tr>
              <td><a href="sale_1.php"><img src="banner1.png"/></a></td>
            </tr>
          </table>
          <table width="750px" height="600px">
            <tr>
              <td><a href="sale_2.php"><img src="banner2.png"/></a></td>
            </tr>
          </table>
          <table width="750px" height="600px">
            <tr>
              <td><a href="sale_3.php"><img src="banner3.png"/></a></td>
            </tr>
          </table>
        </div>

    And these are the up and down buttons for the scrolling:

        <table width="750px" height="30px">
          <tr>
            <td align="right"><img src="images/up_arrow.png"/><img src="images/down_arrow.png"/></td>
          </tr>
        </table>


  • Error No. 150 in MySQL

    - by buddhamagnet
    Hi all. This is making me sweat: I am getting error 150 when I try to create a table in MySQL. I've scoured the forums to no avail. The statement uses foreign key constraints; both tables are InnoDB, all relevant columns have the same data type, and both tables have the same charset and collation. Here are the CREATE TABLE statement and the original CREATE TABLE statement for the table being referenced. Any ideas?

    New table:

        CREATE TABLE `approval` (
          `rev_id` int(10) UNSIGNED NOT NULL,
          `rev_page` int(10) UNSIGNED NOT NULL,
          `user_id` int(10) UNSIGNED NOT NULL,
          `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
          PRIMARY KEY (`rev_id`,`rev_page`,`user_id`),
          KEY `FK_approval_user` (`user_id`),
          CONSTRAINT `FK_approval_revision` FOREIGN KEY (`rev_id`, `rev_page`)
            REFERENCES `revision` (`rev_id`, `rev_page`),
          CONSTRAINT `FK_approval_user` FOREIGN KEY (`user_id`)
            REFERENCES `user` (`user_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    Referenced table:

        CREATE TABLE `revision` (
          `rev_id` int(10) unsigned NOT NULL auto_increment,
          `rev_page` int(10) unsigned NOT NULL,
          `rev_text_id` int(10) unsigned NOT NULL,
          `rev_comment` tinyblob NOT NULL,
          `rev_user` int(10) unsigned NOT NULL default '0',
          `rev_user_text` varbinary(255) NOT NULL default '',
          `rev_timestamp` binary(14) NOT NULL default '\0\0\0\0\0\0\0\0\0\0\0\0\0\0',
          `rev_minor_edit` tinyint(3) unsigned NOT NULL default '0',
          `rev_deleted` tinyint(3) unsigned NOT NULL default '0',
          `rev_len` int(10) unsigned default NULL,
          `rev_parent_id` int(10) unsigned default NULL,
          PRIMARY KEY (`rev_id`),
          UNIQUE KEY `rev_page_id` (`rev_page`,`rev_id`),
          KEY `rev_timestamp` (`rev_timestamp`),
          KEY `page_timestamp` (`rev_page`,`rev_timestamp`),
          KEY `user_timestamp` (`rev_user`,`rev_timestamp`),
          KEY `usertext_timestamp` (`rev_user_text`,`rev_timestamp`)
        ) ENGINE=InnoDB AUTO_INCREMENT=4904 DEFAULT CHARSET=binary
          MAX_ROWS=10000000 AVG_ROW_LENGTH=1024;
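
    A hedged diagnostic note: error 150 is InnoDB's generic foreign-key failure, and the server records the specific reason (for instance, a referenced index whose leftmost columns don't match the FK column order) in the engine status output:

        -- The "LATEST FOREIGN KEY ERROR" section of this output explains
        -- exactly why the constraint was rejected.
        SHOW ENGINE INNODB STATUS;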


  • SQL Compact performance on device

    - by Ben M
    My SQL Compact database is very simple, with just three tables and a single index on one of the tables (the table with 200k rows; the other two have less than a hundred each). The first time the .sdf file is used by my Compact Framework application on the target Windows Mobile device, the system hangs for well over a minute while "something" is done to the database: when deployed, the DB is 17 megabytes, and after this first usage, it balloons to 24 megs. All subsequent usage is pretty fast, so I'm assuming there's some sort of initialization / index building going on during this first usage. I'd rather not subject the user to this delay, so I'm wondering what this initialization process is and whether it can be performed before deployment. For now, I've copied the "initialized" database back to my desktop for use in the setup project, but I'd really like to have a better answer / solution. I've tried "full compact / repair" in the VS Database Properties dialog, but this made no difference. Any ideas? For the record, I should add that the database is only read from by the device application -- no modifications are made by that code.


  • Python modules, classes, and functions documentation through Sphinx

    - by user343934
    Hi everyone. I am trying to document my small project with Sphinx, which I'm only recently getting familiar with. I read some tutorials and the Sphinx documentation but couldn't make it work. Setup and configuration are OK; I just have problems using Sphinx in a technical way. My table of contents should look like this:

        Overview
            Contents
        Configuration
            Contents
        System Requirements
            Contents
        How to use
            Contents
        Modules
            Index
            Display
        Help
            Contents

    My focus is on the Modules section, driven by docstrings. The modules live in the directory c:/wamp/www/project/:

        index.py
            class HtmlTemplate:
                def header()
                def body()
                def form()
                def footer()
            __init_main:

        display.py
            class MainDisplay:
                def execute()
                def display()
                def tree()
            __init_main:

    My documentation directory is c:/users/abc/Desktop/Documentation/doc/:

        _build
        _static
        _templates
        conf.py
        index.rst

    I have added the modules directory to the system environment and edited index.rst with the following:

        Welcome to Seq-alignment's documentation!

        Contents:

        .. toctree::
           :maxdepth: 2

        .. automodule:: index.py

        .. autoclass:: HtmlTemplate
           :members: Header, Body, Form, Footer, CloseHtml

        .. automodule:: display.py

        .. autoclass:: MainDisplay
           :members: execute, display, tree

        Indices and tables

        * :ref:`genindex`
        * :ref:`modindex`
        * :ref:`search`

    When I build and view the HTML, I don't get the modules in the contents table; when I click through, I just get an "index.txt" version in another window. I need your suggestions. Thanks.


  • MySQL query problem

    - by Luke
    OK, I have the following problem. I have two InnoDB tables, 'places' and 'events'. One place can have many events, but an event can be created without entering a place; in that case the event's foreign key is NULL. The simplified structure of the tables looks as follows:

        CREATE TABLE IF NOT EXISTS `events` (
          `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(255) COLLATE utf8_bin NOT NULL,
          `places_id` int(9) unsigned DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY `fk_events_places` (`places_id`)
        ) ENGINE=InnoDB;

        ALTER TABLE `events`
          ADD CONSTRAINT `events_ibfk_1` FOREIGN KEY (`places_id`)
          REFERENCES `places` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        CREATE TABLE IF NOT EXISTS `places` (
          `id` int(9) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(255) COLLATE utf8_bin NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB;

    The question is, how do I construct a query that returns the name of each event and the name of the corresponding place (or no value, in case there is no place assigned)? I am able to do it with two queries, but then I am visibly separating events which have a place assigned from the ones without one. Help really appreciated.
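
    For what it's worth, a sketch of the usual single-query shape for this; the alias names are mine. With a LEFT JOIN, events without a place come back with NULL in place_name instead of being dropped:

        -- One row per event; place_name is NULL when places_id is NULL.
        SELECT e.name AS event_name,
               p.name AS place_name
        FROM events e
        LEFT JOIN places p ON p.id = e.places_id;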


  • Sync data between a windows desktop app and windows mobile client app

    - by Chris W
    I need to knock up a very quick prototype/proof of concept application to demo to someone within the next couple of days so I've minimal time to research this as fully as I normally would. The set-up is a very simple database application running on a laptop - will only ever be a single user updating a couple of tables so I was thinking of knocking up a basic Win Forms app against SQL Compact. Visual Studio's auto generated data grid edit screens will be fine with a little customisation. The second aspect is to then add a windows mobile client application that can pull data from both tables stored on the laptop, edit some data and insert some extra rows before sending the changes back to the laptop copy of the database. I've not done any WinMo development so what's the best approach for me to look at. Is it easy enough to sync data between the two databases when the WinMo device is connected to the laptop with USB? Most of the samples I've looked at so far seem to be syncing SQL Compact with SQL Standard using IIS which seems a bit overkill. The volumes of data to be synced are so small that I can easily write some manual sync code if it's easy for me to query/update the Compact DB from the laptop application when the device is connected.


  • Performance of inter-database query (between linked servers)

    - by Swoosh
    I have an import between 2 linked servers. I basically have to get the data from a multi-table join into a table on my side. The current query is something like this:

        select a.*
        from db1.dbo.tbl1 a
        inner join db1.dbo.tbl2 on ...
        inner join db1.dbo.tbl3 on ...
        inner join db1.dbo.tbl4 on ...
        inner join db2.dbo.myside on ...

    where db1 is the linked server and db2 is my own database. After this, I use an INSERT INTO ... SELECT to add the data to my table, which is located in db2 (usually a few hundred records, with this import running once a minute).

    My question is about performance. The tables on the linked server (tbl1, tbl2, tbl3, tbl4) are huge, with millions of records, and they are slowing down the import process. I was told that if I do the join on the "other" side (db1, the linked server), for example in a stored procedure, then even if the query looks the same, it would run faster. Is that right? This is kind of hard to test. Note that the join contains a table from my database too. Also, are there other "tricks" I could use to make this run faster? Thanks
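
    A hedged sketch of the usual trick when most of the join is remote: run the remote-only part on the linked server via OPENQUERY, so the millions of rows are joined and filtered there, stage the small result locally, and only then join the local table. All table, column, and server names below are placeholders:

        -- 1. Execute the remote-only join on the linked server itself.
        SELECT r.col1, r.col2, r.key_id
        INTO #staged
        FROM OPENQUERY(DB1SRV,
            'SELECT a.col1, a.col2, a.key_id
             FROM dbo.tbl1 a
             INNER JOIN dbo.tbl2 b ON b.id = a.tbl2_id
             INNER JOIN dbo.tbl3 c ON c.id = a.tbl3_id') AS r;

        -- 2. Join the staged rows against the local table and insert.
        INSERT INTO db2.dbo.target (col1, col2)
        SELECT s.col1, s.col2
        FROM #staged s
        INNER JOIN db2.dbo.myside m ON m.key_id = s.key_id;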


  • SQL Server 2008 - Full Text Query

    - by user208662
    Hello, I have two tables in a SQL Server 2008 database at my company. The first table represents the products my company sells; the second contains the product manufacturers' details. These tables are defined as follows:

        Product
        -------
        ID
        Name
        ManufacturerID
        Description

        Manufacturer
        ------------
        ID
        Name

    As you can imagine, I want to make it as easy as possible for our customers to query this data. However, I'm having problems writing a forgiving yet powerful search query. For instance, I'm anticipating that people will search based on phonetic spellings, so the search terms may not match the exact data in my database. In addition, I think some individuals will search by manufacturer name first, but I want the matching product names to appear first. Based on these requirements, I'm currently working on the following query:

        select p.Name as 'ProductName', m.Name as 'Manufacturer', r.Rank as 'Rank'
        from Product p
        inner join Manufacturer m on p.ManufacturerID = m.ID
        inner join CONTAINSTABLE(Product, Name, @searchQuery) as r

    Oddly, this query throws an error, and I have no idea why. Squiggles appear to the right of the last parenthesis in Management Studio; the tooltip says "An expression of non-boolean type specified in a context where a condition is expected". I understand what this statement means; however, I guess I do not know how CONTAINSTABLE works. What am I doing wrong? Thank you
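
    For reference, a hedged sketch of the usual CONTAINSTABLE join shape: the function returns a rowset with [KEY] and RANK columns, and the error above is typically the missing ON clause joining [KEY] back to the table's unique key:

        -- Join the rank rowset back on its [KEY] column (Product.ID here),
        -- then order so the best matches come first.
        SELECT p.Name AS ProductName,
               m.Name AS Manufacturer,
               r.RANK AS Rank
        FROM Product p
        INNER JOIN Manufacturer m ON p.ManufacturerID = m.ID
        INNER JOIN CONTAINSTABLE(Product, Name, @searchQuery) AS r
            ON p.ID = r.[KEY]
        ORDER BY r.RANK DESC;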


  • Where should my "filtering" logic reside with Linq-2-SQL and ASP.NET-MVC in View or Controller?

    - by Nate Bross
    I have a main table with several "child" tables: TableA, and TableAChild1 and TableAChild2. I have a view which shows the information in TableA and then has two columns with all the items in TableAChild1 and TableAChild2 respectively; they are rendered with partial views. Both child tables have a bit field VisibleToAll, and depending on the user's role I'd like to display either all related rows or only the rows where VisibleToAll = true.

    This code feels like it should be in the controller, but I'm not sure how it would look, because as it stands the controller (limited version) looks like this:

        return View("TableADetailView", repos.GetTableA(id));

    Would something like this even work? And would it be bad: what if my DataContext gets submitted, would that delete all the rows that have VisibleToAll == false?

        var tblA = repos.GetTableA(id);
        tblA.TableAChild1 = tblA.TableAChild1.Where(tmp => tmp.VisibleToAll == true);
        tblA.TableAChild2 = tblA.TableAChild2.Where(tmp => tmp.VisibleToAll == true);
        return View("TableADetailView", tblA);

    It would also be simple to add that logic to the RenderPartial call from the main view:

        <% Html.RenderPartial("TableAChild1", Model.TableAChild1.Where(tmp => tmp.VisibleToAll == true)); %>


  • The Impossible Layout?

    - by Kyle K
    I'm beginning to think this is impossible, but thought I'd ask you guys. Basically it's a 2-column layout, but the "business" wants the following:

    - Always take up the entire browser window.
    - Accommodate resizing of the browser window.
    - The left column is fixed width, but that width should be flexible from page to page.
    - The left column has a region at the top with fixed height.
    - The left column has a bottom region that takes up the remaining vertical space of the browser window; if its content is very large, it gets a scroll bar just for that region.
    - The right column takes up the remaining horizontal space of the browser window.
    - The right column has a region at the top with fixed height.
    - The right column has a bottom region that takes up the remaining vertical space of the browser window; if its content is very large, it gets a scroll bar just for that region.

    I've tried everything... divs, floated, absolutely positioned, tables, divs in tables... Is this even possible? Here's an image of what it should look like: http://imgur.com/zk1jP.png


  • Stop SQL returning the same result twice in a JOIN

    - by nbs189
    I have joined together several tables to get the data I want, but since I am new to SQL I can't figure out how to stop data being returned more than once. Here's the SQL statement:

        SELECT T.url, T.ID, S.status, S.ID, E.action, E.ID, E.timestamp
        FROM tracks T, status S, events E
        WHERE S.ID AND T.ID = E.ID
        ORDER BY E.timestamp DESC

    The data that is returned is something like this:

        +-----+----+--------+----+----------------------+----+-----------+
        | URL | ID | Status | ID | action               | ID | timestamp |
        +-----+----+--------+----+----------------------+----+-----------+
        | T.1 | 4  | hello  | 4  | has uploaded a track | 4  | time      |
        | T.2 | 3  | bye    | 3  | has some news        | 3  | time      |
        | t.1 | 4  | more   | 4  | has some news        | 4  | time      |
        +-----+----+--------+----+----------------------+----+-----------+

    That's a very basic example, but it does outline what happens. If you look at the third row, the URL is repeated when there is a different status. This is what I want to happen:

        +---------------+----+----------------------+-----------+
        | URL or Status | ID | action               | timestamp |
        +---------------+----+----------------------+-----------+
        | T.1           | 4  | has uploaded a track | time      |
        | hello         | 3  | has some news        | time      |
        | bye           | 4  | has some news        | time      |
        +---------------+----+----------------------+-----------+

    Please notice that the URL (in this case the mock one is T.1) is shown when the action is "has uploaded a track". This is very important. The action in the events table is inserted by a trigger on a status or track insert: if a new track is inserted, the action is "has uploaded a track", and you can guess what it is for a status. The ID and timestamp are also inserted into the events table at that point. Note: there are 3 more tables that go into the query, but I have left them out for simplicity.
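
    A hedged sketch of one way to get the desired shape: join events to each source table only for the matching action, and fold the two name columns into one with COALESCE. The action strings are taken from the example above and would have to match the real trigger values:

        -- One row per event: the track URL when the action is an upload,
        -- otherwise the status text.
        SELECT COALESCE(t.url, s.status) AS url_or_status,
               e.ID,
               e.action,
               e.timestamp
        FROM events e
        LEFT JOIN tracks t
               ON t.ID = e.ID AND e.action = 'has uploaded a track'
        LEFT JOIN status s
               ON s.ID = e.ID AND e.action <> 'has uploaded a track'
        ORDER BY e.timestamp DESC;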


  • SQL left join with multiple rows into one row

    - by beardedd
    Basically, I have two tables: Table A contains the actual items that I care to get out, and Table B is used for language translations. Any time text is used within Table A, instead of storing actual varchar values, it stores ids that relate back to text stored in Table B. By adding a LanguageID column to Table B, this allows me to have multiple translations for the same row in the database. Example:

        Table A
        Title (int)
        Description (int)
        Other Data...

        Table B
        TextID (int) - this is the column whose value is stored in other tables
        LanguageID (int)
        Text (varchar)

    My question is more a call for suggestions on how best to handle this. Ideally, I want a query that gets the text, as opposed to the ids of the text, out of the table. Currently, when I have two text items in the table, this is what I do:

        SELECT C.ID, C.Title, D.Text AS Description
        FROM (SELECT A.ID, A.Description, B.Text AS Title
              FROM TableA A, TranslationsTable B
              WHERE A.Title = B.TextID AND B.LanguageID = 1) C
        LEFT JOIN TranslationsTable D
            ON C.Description = D.TextID AND D.LanguageID = 1

    This query gives me the row from Table A I am looking for (using WHERE clauses in the inner SELECT), with the actual text based on the language id instead of the text ids. This works fine when I am only using one or two text items, but with a third item or more it starts to get really messy: essentially another LEFT JOIN on top of the example. Any suggestions for a better query, or at least a good way to handle 3 or more text items in a single row?
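
    A hedged sketch of the flatter pattern: one aliased LEFT JOIN per translated column, which scales to three or more text columns without nesting (same tables as above, with the language id hard-coded to 1 as in the question):

        -- Each translated column gets its own alias of the translations
        -- table; a third text column is just one more LEFT JOIN.
        SELECT a.ID,
               t.Text AS Title,
               d.Text AS Description
        FROM TableA a
        LEFT JOIN TranslationsTable t
               ON t.TextID = a.Title AND t.LanguageID = 1
        LEFT JOIN TranslationsTable d
               ON d.TextID = a.Description AND d.LanguageID = 1;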


  • ExecuteNonQuery: Connection property has not been initialized

    - by 20151012
    I get an error in my code; the error points at dbcom.ExecuteNonQuery().

    Connection code:

        public admin_addemp()
        {
            InitializeComponent();
            con = new OleDbConnection(@"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=|DataDirectory|\EMPS.accdb;Persist Security Info=True");
            con.Open();
            ds.Tables.Add(dt);
            ds.Tables.Add(dt2);
        }

    Save button code:

        private void save_btn_Click(object sender, EventArgs e)
        {
            OleDbCommand check = new OleDbCommand();
            check.Connection = con;
            check.CommandText = "SELECT COUNT(*) FROM employeeDB WHERE ([EmployeeName]=?)";
            check.Parameters.AddWithValue("@EmployeeName", tb_name.Text);
            if (Convert.ToInt32(check.ExecuteScalar()) == 0)
            {
                dbcom2 = new OleDbCommand("SELECT EmployeeID FROM employeeDB WHERE EmployeeID='" + tb_id.Text + "'", con);
                OleDbParameter param = new OleDbParameter();
                param.ParameterName = tb_id.Text;
                dbcom.Parameters.Add(param);   // note: the parameter is added to dbcom here, not dbcom2
                OleDbDataReader read = dbcom2.ExecuteReader();
                if (read.HasRows)
                {
                    MessageBox.Show("The Employee ID '" + tb_id.Text + "' already exists. Please choose another Employee ID.");
                    read.Dispose();
                    read.Close();
                }
                else
                {
                    string q = "INSERT INTO employeeDB (EmployeeID, EmployeeName, IC, Address, State, " +
                               " Postcode, DateHired, Phone, ManagerName) VALUES ('" + tb_id.Text + "', " +
                               " '" + tb_name.Text + "', '" + tb_ic.Text + "', '" + tb_add1.Text + "', '" + cb_state.Text + "', " +
                               " '" + tb_postcode.Text + "', '" + dateTimePicker1.Value + "', '" + tb_hp.Text + "', '" + cb_manager.Text + "')";
                    dbcom = new OleDbCommand(q, con);
                    dbcom.ExecuteNonQuery();
                    MessageBox.Show("New Employee '" + tb_name.Text + "' - Successfully Added.");
                }
            }
            else
            {
                MessageBox.Show("Employee name '" + tb_name.Text + "' already added into the database");
            }
            ds.Dispose();
        }

    I'm using Microsoft Access 2010 as my database, and this is a stand-alone system. Please help me.

