Search Results

Search found 28930 results on 1158 pages for 'sql ce'.


  • MySQL - Calculating a value from two columns in each row, using the result in a WHERE or HAVING clause

    - by taber
    I have a MySQL db schema like so:

        id   flags   views   done
        --------------------------
        1    2       20      0
        2    66      100     0
        3    25      40      0
        4    30      60      0
        ... thousands of rows ...

    I want to update all of the rows whose flags / views are >= 0.2. First, as a test, I want to run a SELECT to see how many rows would actually get updated. I tried:

        SELECT flags/views AS poncho FROM mytable HAVING poncho >= 0.2

    But this only returns about 2 rows and I know there should be a lot more than that. It seems like it's calculating the poncho value on all rows or something odd. What am I doing wrong? Thanks!
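
    A minimal sketch of one way to check and then apply the change, assuming the intended threshold really is flags/views >= 0.2 and that done is the column being updated (both assumptions, not stated outright in the post):

        -- Count the rows the UPDATE would touch (views > 0 guards against division by zero).
        SELECT COUNT(*)
        FROM mytable
        WHERE views > 0
          AND flags / views >= 0.2;

        -- If the count looks right, the corresponding UPDATE might be:
        UPDATE mytable
        SET done = 1
        WHERE views > 0
          AND flags / views >= 0.2;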

    Read the article

  • MsSQL 2005 query performance

    - by Max
    I have the following query:

        select .............
        from //one table and about 20 left joins//
        where (
            ( this_.driverName like 'blah*' or this_.renterName like 'blah*' )
            or exists (
                select this0__.id as y0_
                from ThirdParty this0__
                where this0__.name like 'blah*' and this0__.claim_id = this_.id
            )
        )
        order by this_.id asc

    And I have two environments: one with 175,000 records in table "this_" and a second with 25,000 records in table "this_". The query runs fine on the 175k database, in about 2 seconds, but on the 25k database it freezes. And if I drop the following item from the where clause:

        ( this_.driverName like 'blah*' or this_.renterName like 'blah*' )
        or exists (
            select this0__.id as y0_
            from ThirdParty this0__
            where this0__.name like 'blah*' and this0__.claim_id = this_.id
        )

    the query runs normally. How can I increase the performance of this query?
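
    Not part of the original post, but a sketch of the usual first steps for this shape of query (a correlated EXISTS plus LIKE filters). The ThirdParty table and its claim_id and name columns are taken from the question; the index name is made up:

        -- A covering index for the correlated EXISTS sub-select.
        CREATE INDEX IX_ThirdParty_claim_id_name
            ON ThirdParty (claim_id, name);

        -- Stale statistics are a common reason the same query behaves so
        -- differently on two databases of different sizes.
        UPDATE STATISTICS ThirdParty;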

    Read the article

  • One table, need multiple values from different rows/tuples

    - by WmasterJ
    I have tables like:

        'profile_values'
        userID | fid | value
        -------+-----+---------------------
        1      | 3   | [email protected]
        1      | 45  | 203-234-2345
        3      | 3   | [email protected]
        1      | 45  | 123-456-7890

    And:

        'users'
        userID | name
        -------+------
        1      | joe
        2      | jane
        3      | jake

    I want to join them and have one row with two of the values, like:

        userID | name | email           | phone
        -------+------+-----------------+--------------
        1      | joe  | [email protected] | 203-234-2345
        2      | jane | [email protected] | 123-456-7890

    I have solved it, but it feels clumsy and I want to know if there is a better way to do it, meaning solutions that are either more readable, faster (optimized), or simply best practice. Current solution: multiple tables selected, many conditional statements:

        SELECT u.userID AS memberid, u.name AS first_name,
               pv1.value AS fname, pv2.value AS lname
        FROM users AS u, profile_values AS pv1, profile_values AS pv2
        WHERE u.userID = pv1.userID AND pv1.fid = 3
          AND u.userID = pv2.userID AND pv2.fid = 45;

    Thanks for the help!
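
    A sketch of the same query written with explicit joins, which is generally considered more readable than the implicit comma-join form. The aliases are renamed to email/phone to match the desired output; nothing else is changed:

        SELECT u.userID  AS memberid,
               u.name    AS first_name,
               pv1.value AS email,
               pv2.value AS phone
        FROM users AS u
        JOIN profile_values AS pv1
          ON pv1.userID = u.userID AND pv1.fid = 3
        JOIN profile_values AS pv2
          ON pv2.userID = u.userID AND pv2.fid = 45;

    If some users might be missing one of the two field values, switching the JOINs to LEFT JOINs keeps those users in the result.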

    Read the article

  • Integer comparison as string

    - by J Pollack
    Hi, I have an integer column and I want to find numbers that start with specific digits. For example, these match if I look for '123':

        1234567
        123456
        1234

    These do not match:

        23456
        112345
        0123445

    Is the only way to handle the task to convert the integers into strings before doing a string comparison? Currently I am using Postgres regexp_replace(text, pattern, replacement) on the numbers, which is a very slow and inefficient way of doing it. The point is that I have a large amount of data to handle this way, and I am looking for the most economical way of doing it.
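
    Not from the original post: a sketch of a cast-free approach that can still use an index on the integer column, assuming a table named items with an integer column num (hypothetical names). The idea is that "starts with 123" is equivalent to a set of numeric ranges, one per possible length:

        -- Matches 123, 1230..1239, 12300..12399, ... up to 7-digit values.
        SELECT *
        FROM items
        WHERE (num = 123)
           OR (num BETWEEN 1230 AND 1239)
           OR (num BETWEEN 12300 AND 12399)
           OR (num BETWEEN 123000 AND 123999)
           OR (num BETWEEN 1230000 AND 1239999);

        -- The simpler alternative is a cast plus LIKE, which is easier to write
        -- but cannot use a plain b-tree index on num:
        SELECT * FROM items WHERE num::text LIKE '123%';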

    Read the article

  • SSIS(sql server integration service) xml data flow

    - by swapna
    Hi, I have an XML file whose content I have to write to a database table using an SSIS package. I am using an XML source and an OLE DB destination. My issue is that this XML file generates multiple outputs (event, product, offer, form, etc.), but I need to write everything into one data row in the database (more than one row if there are 2 products for the event). I do not know how to take these multiple outputs and make a single row per event. I have read numerous articles on this subject but am not able to decide what the right way of doing this is:

    1) XML source? (If I use this, how do I merge the multiple outputs?)
    2) Or a script task using XML objects to read and write to the DB? Or anything new?

    Please provide me some solutions. XML sample file: * - ABc. 2009-06-07 2010-04-30 region test 1 contact - offertest product1 product1 187 * Thanks, SNA

    Read the article

  • Configure database mail settings

    - by Paresh
    How can I configure Database Mail settings and send mail from the database in the default database instance that SharePoint created? I cannot find where to configure the Database Mail settings in Management Studio after logging in as the sa user.
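
    Not part of the original question, but a sketch of the usual T-SQL route for setting up Database Mail when the configuration wizard is hard to find; the account/profile names, addresses, and SMTP server below are placeholders:

        -- Enable the Database Mail feature.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'Database Mail XPs', 1;
        RECONFIGURE;

        -- Create an account and a profile, then link them.
        EXEC msdb.dbo.sysmail_add_account_sp
             @account_name = 'DefaultAccount',
             @email_address = 'alerts@example.com',
             @mailserver_name = 'smtp.example.com';
        EXEC msdb.dbo.sysmail_add_profile_sp @profile_name = 'DefaultProfile';
        EXEC msdb.dbo.sysmail_add_profileaccount_sp
             @profile_name = 'DefaultProfile',
             @account_name = 'DefaultAccount',
             @sequence_number = 1;

        -- Send a test message.
        EXEC msdb.dbo.sp_send_dbmail
             @profile_name = 'DefaultProfile',
             @recipients = 'someone@example.com',
             @subject = 'Database Mail test',
             @body = 'Database Mail is configured.';

    One caveat: if SharePoint installed a Windows Internal Database or Express edition instance, Database Mail may not be available in that edition at all.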

    Read the article

  • Querying tables based on other column values

    - by blcArmadillo
    Is there a way to query different tables based on the value of a column in the query? Say, for example, you have the following columns:

        id
        part_id
        attr_id
        attr_value_ext
        attr_value_int

    You then run a query, and if attr_id is '1' it returns the attr_value_int column, but if attr_id is greater than '1' it joins data from another table based on attr_value_ext.
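
    Not from the original post: a sketch of one common pattern for this, a conditional LEFT JOIN plus CASE. The main table is assumed to be called attributes and the external values to live in ext_values(id, value); both names are made up:

        SELECT a.id,
               a.part_id,
               a.attr_id,
               CASE
                   WHEN a.attr_id = 1 THEN a.attr_value_int
                   ELSE e.value          -- cast as needed if the two value columns differ in type
               END AS resolved_value
        FROM attributes AS a
        LEFT JOIN ext_values AS e
               ON a.attr_id > 1
              AND e.id = a.attr_value_ext;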

    Read the article

  • Failed to Kill Process in SQL 2008

    - by Andrea.Ko
    I have a process with the following information, and when I execute KILL on this id it returns "Only user processes can be killed."

        SPID: 11   Status: BACKGROUND   Login: sa   HostName: .   BlkBy: .   DBName: SAFEMIG   Command: CHECKPOINT

    Normally every session that logs in to this server has a HostName showing the PC name, but this connection shows only a dot, so I am not sure who is executing what process on this connection. I ran "dbcc inputbuffer(11)" and it returned "EventType = No Event, Parameters = 0, EventInfo = Null". I would appreciate any help/advice on this problem!
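
    Not part of the original question, but a sketch of how to confirm what kind of session SPID 11 is; low SPIDs with a BACKGROUND status and a CHECKPOINT command are internal system tasks, which is why KILL refuses them:

        -- is_user_process = 0 marks an internal background task that cannot be killed.
        SELECT session_id, is_user_process, status, login_name, host_name
        FROM sys.dm_exec_sessions
        WHERE session_id = 11;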

    Read the article

  • MySQL index building performance

    - by Christian
    I tried to build an index over two columns of a table with 30,000,000 entries. I canceled the process after about 60 hours as it didn't seem to be making progress. For some reason MySQL uses only 22 MB of RAM instead of using the available RAM fully. Is index building an operation that needs no RAM, or is there some way to tell MySQL to use more RAM so it finishes faster?
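
    Not from the original post: a sketch of the session variables that usually govern how much memory an ALTER TABLE ... ADD INDEX can use. The values are placeholders, the table and column names are made up, and which knob matters depends on the storage engine (these are the MyISAM-era buffers; for InnoDB the buffer pool size matters most):

        -- Raise the sort buffers for this session before building the index.
        SET SESSION myisam_sort_buffer_size = 256 * 1024 * 1024;
        SET SESSION sort_buffer_size        = 64 * 1024 * 1024;
        SET SESSION read_buffer_size        = 16 * 1024 * 1024;

        ALTER TABLE big_table ADD INDEX idx_two_cols (col_a, col_b);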

    Read the article

  • Wiki Database, is there one?

    - by Faiz
    I was searching the net for something like a wiki database, just like wikipedia but instead stores structured content, editable by users. What I was looking for was an online database accessible by everyone where people can design the schema and data with proper versioning of both schema and data. I couldn't find any such site. I am not sure if it is my search skills or if there really is no wiki database as of now. Does anyone out there know anything like this? I think there is a great potential for something like this. A possible example will be a website with a GUI for querying a MySQL DB where any website visitor can create DB objects and populate data.

    Read the article

  • How can I adjust the CommandTImeout in DbFit for long running queries?

    - by Ben Farmer
    Is there any way to increase the CommandTimeout for DbFit queries? I have a long running stored procedure that times out when running it in a DbFit Test. It's possible for the procedure to run for a really long time (processing millions of records) and would like to have DbFit wait until it's completed, even if it takes several minutes. We are using the latest version of FitSharp (downloaded it yesterday) and use the version of DbFit that is included with FitSharp.

    Read the article

  • PLPGSQL : How to return a record from function executed by INSERT/UPDATE rule?

    - by seas
    I have the following schema for my database:

        create sequence data_sequence;

        create table data_table (
          id integer primary key,
          field varchar(100)
        );

        create view data_view as select id, field from data_table;

        create function data_insert(_new data_view) returns data_view as $$
        declare
          _id integer;
          _result data_view%rowtype;
        begin
          _id := nextval('data_sequence');
          insert into data_table(id, field) values(_id, _new.field);
          select * into _result from data_view where id = _id;
          return _result;
        end;
        $$ language plpgsql;

        create rule insert as on insert to data_view do instead select data_insert(new);

    Then I type in psql:

        insert into data_view(field) values('abc');

    I would like to see something like:

         id | field
        ----+-------
          1 | abc

    Instead I see:

         data_insert
        -------------
         (1, "abc")

    Is it possible to fix this somehow? Thanks for any ideas. The ultimate idea is to use this in other functions, so that I could obtain the id of the just-inserted record without selecting for it from scratch. Something like:

        insert into data_view(field) values('abc') returning id into my_variable

    would be nice, but it doesn't work, with the error:

        ERROR: cannot perform INSERT RETURNING on relation "data_view"
        HINT: You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause.

    I don't really understand that HINT. I use PostgreSQL 8.4.
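
    Not part of the original question, but a sketch of the approach the HINT points at: replace the function-calling rule with an unconditional DO INSTEAD INSERT rule that carries its own RETURNING clause, so both the normal psql output and INSERT ... RETURNING id work:

        -- Drop the original function-calling rule, then recreate the rewrite
        -- as a plain INSERT with RETURNING.
        drop rule if exists "insert" on data_view;

        create rule "insert" as
        on insert to data_view do instead
          insert into data_table (id, field)
          values (nextval('data_sequence'), new.field)
          returning data_table.id, data_table.field;

    With that in place, insert into data_view(field) values('abc') returning id should work directly, including RETURNING ... INTO my_variable inside a plpgsql function.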

    Read the article

  • Search select statement

    - by Nana
    I am creating a page with different fields for the user to search from, e.g. search by:

        Grade:        -dropdownlist1-
        Student name: -dropdownlist2-
        Student ID:   -dropdownlist3-
        Lessons:      -dropdownlist4-
        Year:         -dropdownlist5-

    How do I write the select statement for this? Each dropdownlist would need a select statement that extracts different data from the database, but I want to write ONE select statement which can dynamically handle the chosen dropdownlist options instead of writing many, many select statements. Let's say:

        Grade:        -dropdownlist1-; default value (all)
        Student name: -dropdownlist2-; default value (all)
        Student ID:   -dropdownlist3-; 0-100 is chosen
        Lessons:      -dropdownlist4-; A-C is chosen
        Year:         -dropdownlist5-; 2009 is chosen
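
    Not from the original post: a sketch of the usual single-statement pattern for optional filters, where each parameter is either a chosen value or NULL to mean "all". The students table, its columns, and the @-style parameters are assumptions; the placeholder syntax depends on the client library:

        SELECT *
        FROM students
        WHERE (@grade       IS NULL OR grade = @grade)
          AND (@student     IS NULL OR student_name = @student)
          AND (@id_from     IS NULL OR student_id BETWEEN @id_from AND @id_to)
          AND (@lesson_from IS NULL OR lesson BETWEEN @lesson_from AND @lesson_to)
          AND (@year        IS NULL OR school_year = @year);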

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation we would like to do would transform the table content based on a granularity of minutes, rather than the current production event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of existing table rows would look like:

        +---------+---------+------------+-------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute | Output |
        +---------+---------+------------+-------------------+--------+
        | 080025  | ABC123  | M01        | 2010-01-24 16:19  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:20  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:21  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:22  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:23  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:24  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:25  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:26  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:27  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:28  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:29  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:30  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:31  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:32  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:33  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:34  | 133    |
        +---------+---------+------------+-------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and modify the granularity to minutes, eliminating the redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides Output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
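
    Not from the original post: a sketch of one way to explode the event rows into per-minute rows entirely in MySQL, using a helper table of minute offsets. The table names events, production_minutes, and minutes (and the snake_case column names) are made up; only the logic follows the question:

        -- Helper table holding 0, 1, 2, ... up to the longest event length in minutes.
        CREATE TABLE minutes (n INT UNSIGNED PRIMARY KEY);

        INSERT INTO production_minutes (user_id, work_id, machine_id, production_minute, output)
        SELECT e.user_id,
               e.work_id,
               e.machine_id,
               DATE_FORMAT(e.event_start_time + INTERVAL m.n MINUTE, '%Y-%m-%d %H:%i'),
               e.output / (TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time) + 1)
        FROM events e
        JOIN minutes m
          ON m.n <= TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time);

    One row per machine per minute falls out of the join automatically, since every event row is expanded independently.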

    Read the article

  • LINQ: Create persistable Associations in Code, Without Foreign Key

    - by Alex
    Hello, I know that I can create LINQ Associations without a Foreign Key. The problem is, I've been doing this by adding the [Association] attribute in the DBML file (same as through the designer), which will get erased again after I refresh my database (and reload the entire table structure). I know that there is the MyData.cs file (as part of the DBML) in which I can place my partial extensions etc. to domain objects (to persist even after I refresh the DBML), but I don't know how to create an association there?

    Read the article

  • Date arithmetic using integer values

    - by Dave Jarvis
    Problem: string concatenation is slowing down a query:

        date(extract(YEAR FROM m.taken)||'-1-1') d1,
        date(extract(YEAR FROM m.taken)||'-1-31') d2

    This is realized in code as part of a string, which follows (where the p_ variables are integers):

        date(extract(YEAR FROM m.taken)||''-'||p_month1||'-'||p_day1||''') d1,
        date(extract(YEAR FROM m.taken)||''-'||p_month2||'-'||p_day2||''') d2

    This part of the query runs in 3.2 seconds with the dates, and 1.5 seconds without, leading me to believe there is ample room for improvement.

    Question: what is a better way to create the date (presumably without concatenation)? Many thanks!
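
    Not from the original post: a sketch of a concatenation-free way to build the two dates from the integer month/day parameters, using date_trunc plus interval arithmetic:

        -- January 1st of the year of m.taken, offset by whole months and days.
        (date_trunc('year', m.taken)
           + (p_month1 - 1) * interval '1 month'
           + (p_day1   - 1) * interval '1 day')::date AS d1,
        (date_trunc('year', m.taken)
           + (p_month2 - 1) * interval '1 month'
           + (p_day2   - 1) * interval '1 day')::date AS d2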

    Read the article

  • LIKE operator with $variable

    - by skarama
    This is my first question here and I hope it is simple enough to get a quick answer! Basically, I have the following code:

        $variable = curPageURL();
        $query = 'SELECT * FROM `tablename` WHERE `columnname` LIKE '$variable' ;

    If I echo $variable, it prints the current page's URL (which comes from a JavaScript function on my page). Ultimately, what I want is to be able to run a search whose search term is the current page's URL, with wildcards before and after. I am not sure if this is possible at all, or if I simply have a syntax error, because I get no errors, simply no result! I tried:

        $query = 'SELECT * FROM `tablename` WHERE `columnname` LIKE '"echo $variable" ' ;

    But again, I'm probably missing or misplacing a ' " ; etc. Please tell me what I'm doing wrong!
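
    Not from the original post: whichever language builds the string, the statement the database should end up receiving looks like the sketch below, with % wildcards around the URL. Passing the value as a bound parameter (the ? placeholder here is an assumption about the client library) also sidesteps quoting problems like the one in the question:

        SELECT *
        FROM `tablename`
        WHERE `columnname` LIKE CONCAT('%', ?, '%');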

    Read the article

  • Acquiring Table Lock in Database - Interview Question

    - by harigm
    One of my interview questions: if multiple users across the world are accessing the application, and it uses a table whose primary key is an auto-increment field, how can you prevent one user from getting the same primary key while another user's insert is executing? My answer was that I would obtain a lock on the table and make the other user wait until the first user is done with the primary key. But then the question becomes: how do you acquire the table lock programmatically and implement this? And if 1000 users hit the application every minute and you explicitly hold a lock on the table, won't the application become slower? How do you manage this? Please suggest possible answers to the above question.
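
    Not part of the original question, but a sketch of the point such questions usually probe: with an auto-increment key the database already hands out distinct values to concurrent inserts, so no explicit table lock is needed. The orders table here is hypothetical:

        -- Two sessions can run this concurrently; each gets its own generated key.
        INSERT INTO orders (customer_id, total) VALUES (42, 99.50);

        -- LAST_INSERT_ID() is scoped to the current connection, so it returns
        -- the key generated by this session's insert, not anyone else's.
        SELECT LAST_INSERT_ID();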

    Read the article

  • How do I perform a batch insert in Django?

    - by Thierry Lam
    In MySQL, you can insert multiple rows into a table in one query, for n > 0:

        INSERT INTO tbl_name (a,b,c) VALUES (1,2,3), (4,5,6), (7,8,9), ..., (n-2, n-1, n);

    Is there a way to achieve the above with Django queryset methods? Here's an example:

        values = [(1, 2, 3), (4, 5, 6), ...]
        for value in values:
            SomeModel.objects.create(first=value[0], second=value[1], third=value[2])

    I believe the above issues an insert query for each iteration of the for loop. I'm looking for a single query; is that possible in Django?

    Read the article

  • Why index_merge is not used here?

    - by user198729
    Setup:

        mysql> create table t(a integer unsigned, b integer unsigned);
        mysql> insert into t(a,b) values (1,2),(1,3),(2,4);
        mysql> create index i_t_a on t(a);
        mysql> create index i_t_b on t(b);
        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL |    3 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Is there something I'm missing?
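
    Not from the original post: with only three rows, a full table scan is cheaper than merging two index scans, so the optimizer skips index_merge even though both indexes appear under possible_keys. A quick way to see the plan change is to load more data and refresh the statistics, sketched below:

        -- Multiply the rows via a self cross join, then re-check the plan.
        INSERT INTO t (a, b)
        SELECT FLOOR(RAND() * 10000), FLOOR(RAND() * 10000)
        FROM t t1, t t2, t t3, t t4, t t5, t t6, t t7, t t8;

        ANALYZE TABLE t;
        EXPLAIN SELECT * FROM t WHERE a = 1 OR b = 4;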

    Read the article

  • Trouble with LINQ databind to GridView and RowDataBound

    - by Michael
    Greetings all, I am working on redesigning my personal Web site using VS 2008 and have chosen to use LINQ to create my data-access layer. Part of my site will be a little app to help manage my budget better. My first LINQ query does successfully execute and display in a GridView, but when I try to use a RowDataBound event to work with the results and refine them a bit, I get the error:

        The type or namespace name 'var' could not be found (are you missing a using directive or an assembly reference?)

    The interesting part is, if I just try to put a var s = "s"; anywhere else in the same file, I get the same error too. If I go to other files in the web project, var s = "s"; compiles fine. Here is the LINQ query call:

        public static IQueryable pubGetRecentTransactions(int param_accountid)
        {
            clsDataContext db;
            db = new clsDataContext();
            var query = from d in db.tblMoneyTransactions
                        join p in db.tblMoneyTransactions on d.iParentTransID equals p.iTransID into dp
                        from p in dp.DefaultIfEmpty()
                        where d.iAccountID == param_accountid
                        orderby d.dtTransDate descending, d.iTransID ascending
                        select new
                        {
                            d.iTransID,
                            d.dtTransDate,
                            sTransDesc = p != null ? p.sTransDesc : d.sTransDesc,
                            d.sTransMemo,
                            d.mTransAmt,
                            d.iCheckNum,
                            d.iParentTransID,
                            d.iReconciled,
                            d.bIsTransfer
                        };
            return query;
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!this.IsPostBack)
            {
                this.prvLoadData();
            }
        }

        internal void prvLoadData()
        {
            prvCtlGridTransactions.DataSource = clsMoneyTransactions.pubGetRecentTransactions(2);
            prvCtlGridTransactions.DataBind();
        }

        protected void prvCtlGridTransactions_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                var datarow = e.Row.DataItem;
                var s = "s";
                e.Row.Cells[0].Text = datarow.dtTransDate.ToShortDateString();
                e.Row.Cells[1].Text = datarow.sTransDesc;
                e.Row.Cells[2].Text = datarow.mTransAmt.ToString("c");
                e.Row.Cells[3].Text = datarow.iReconciled.ToString();
            } //end if
        } //end RowDataBound

    My googling to date hasn't found a good answer, so I turn it over to this trusted community. I appreciate your time in assisting me.

    Read the article

  • Applying Domain Model on top of Linq2Sql entities

    - by Thomas
    I am trying to practice the model first approach and I am putting together a domain model. My requirement is pretty simple: a UserSession can have multiple ShoppingCartItems. I should start off by saying that I am going to apply the domain model interfaces to Linq2Sql generated entities (using partial classes). My requirement translates into three database tables (UserSession, Product, ShoppingCartItem, where ProductId and UserSessionId are foreign keys in the ShoppingCartItem table). Linq2Sql generates these entities for me. I know I shouldn't even be dealing with the database at this point, but I think it is important to mention. The aggregate root is UserSession, as a ShoppingCartItem cannot exist without a UserSession, but I am unclear on the rest. What about Product? It is definitely an entity, but should it be associated to ShoppingCartItem? Here are a few suggestions (they might all be incorrect implementations):

        public interface IUserSession
        {
            public Guid Id { get; set; }
            public IList<IShoppingCartItem> ShoppingCartItems { get; set; }
        }

        public interface IShoppingCartItem
        {
            public Guid UserSessionId { get; set; }
            public int ProductId { get; set; }
        }

    Another one would be:

        public interface IUserSession
        {
            public Guid Id { get; set; }
            public IList<IShoppingCartItem> ShoppingCartItems { get; set; }
        }

        public interface IShoppingCartItem
        {
            public Guid UserSessionId { get; set; }
            public IProduct Product { get; set; }
        }

    A third one is:

        public interface IUserSession
        {
            public Guid Id { get; set; }
            public IList<IShoppingCartItemColletion> ShoppingCartItems { get; set; }
        }

        public interface IShoppingCartItemColletion
        {
            public IUserSession UserSession { get; set; }
            public IProduct Product { get; set; }
        }

        public interface IProduct
        {
            public int ProductId { get; set; }
        }

    I have a feeling my mind is too tightly coupled with database models and tables, which is making this hard to grasp. Anyone care to decouple?

    Read the article
