Search Results

Search found 16059 results on 643 pages for 'global temp tables'.

  • Hot deploy on Heroku with no downtime

    - by zetarun
    A bad side of pushing to Heroku is that I must push the code (and the server restarts automatically) before running my db migrations. This can obviously cause some 500 errors for users navigating the website: they get the new code without the new tables/attributes. The solution proposed by Heroku is to use maintenance mode, but I want a way with no downtime that keeps my webapp running the whole time! Is there a way? For example, with Capistrano:
    1. I prepare the code to deploy in a new dir.
    2. I run the (backward-compatible) migrations, and the old code continues to work perfectly.
    3. I switch the mongrel instance to the new dir and restart the server.
    ...and I have no downtime!

  • How to serve a View as CSV in ASP.NET Web Forms

    - by ChessWhiz
    Hi, I have an MS SQL view that I want to make available as a CSV download in my ASP.NET Web Forms app. I am using Entity Framework for other views and tables in the project. What's the best way to enable this download? I could add a HyperLink whose click handler iterates over the view, writes its CSV form to disk, and then serves that file. However, I'd prefer not to write to disk if it can be avoided, and that approach involves iteration code that might be avoided with some other solution. Any ideas?

  • checking if a record exists in DB, in a single step or 2 steps?

    - by Sinan
    Suppose you want to get a record from the database, where the query returns a large amount of data and requires multiple joins. My question: is it better to use a single query that fetches the result if it exists, or to run a simpler query first to check that the data exists, and then, knowing it exists, query again for the result? Example with 3 tables a, b and ab (junction table):

        select * from a, b, ab where condition and condition and condition and condition etc...

    or

        select id from a, b, ab where condition

    and then, if the id exists, do the query above. I don't know if there is any reason to do the second. Any ideas how this affects DB performance, or does it matter at all?
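
    A minimal sketch of the two-step variant using EXISTS, so the existence check can stop at the first matching row (all table and column names here are hypothetical):

        -- Step 1: cheap existence check; EXISTS stops at the first match
        SELECT EXISTS (
            SELECT 1
            FROM a
            JOIN ab ON ab.a_id = a.id
            JOIN b  ON b.id   = ab.b_id
            WHERE a.id = 42              -- stand-in for the real conditions
        ) AS record_exists;

        -- Step 2: run the expensive query only if step 1 returned 1
        SELECT a.*, b.*
        FROM a
        JOIN ab ON ab.a_id = a.id
        JOIN b  ON b.id   = ab.b_id
        WHERE a.id = 42;

    In most engines the single query wins when the row usually exists, since the two-step version repeats the join work on every hit.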

  • ajaxtabcontainer with button postback problem

    - by yousof
    I have a dropdownlist on my web page, two command buttons and a tabcontainer. The tabcontainer does not appear after page load, according to my code:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            If Not IsPostBack Then
                TabContainer1.Visible = False
                If CtvAct.GetRecords("Fill_RequestTypeTb") = True Then
                    ReqTypeCmbo.DataSource = CtvAct.MainDataset.Tables("tbOLRequestType").DefaultView
                    ReqTypeCmbo.DataTextField = "RequestTypeName"
                    ReqTypeCmbo.DataValueField = "RequestTypeId"
                    ReqTypeCmbo.DataBind()
                    Dim itm As New ListItem
                    itm.Text = "-- ??? ??? ????? --"
                    itm.Value = "-1"
                    itm.Selected = True
                    ReqTypeCmbo.Items.Insert(0, itm)
                    ReqTypeCmbo.SelectedIndex = 0
                End If
            End If
        End Sub

        Protected Sub PrntCmd_Click(ByVal sender As Object, ByVal e As EventArgs) Handles PrntCmd.Click
            TextBox6.Text = "gggg"
        End Sub

    If I press any button right after page load, the buttons work fine. Selecting an item from the dropdownlist makes the tabcontainer appear, but once the tabcontainer is visible the buttons no longer post back. How can I solve these problems?

  • Can this MySQL subquery be optimised?

    - by Dan
    I have two tables, news and news_views. Every time an article is viewed, the news id, IP address and date are recorded in news_views. I'm using a query with a subquery to fetch the most viewed titles from news, by getting the total count of views in the last 24 hours for each one. It works fine, except that it takes between 5 and 10 seconds to run, presumably because there are hundreds of thousands of rows in news_views and it has to go through the entire table before it can finish. The query is as follows; is there any way at all it can be improved?

        SELECT n.title, nv.views
        FROM news n
        LEFT JOIN (
            SELECT news_id, COUNT(DISTINCT ip) AS views
            FROM news_views
            WHERE datetime >= SUBDATE(NOW(), INTERVAL 24 HOUR)
            GROUP BY news_id
        ) AS nv ON nv.news_id = n.id
        ORDER BY views DESC
        LIMIT 15
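
    One commonly suggested improvement, assuming the column names above, is a composite index so the 24-hour window becomes an index range scan instead of a full pass over news_views. A sketch (EXPLAIN would confirm whether it gets picked up):

        -- Covering index: the subquery can range-scan on datetime and
        -- resolve news_id and ip without touching the base rows
        ALTER TABLE news_views ADD INDEX idx_nv_datetime (datetime, news_id, ip);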

  • Backup AWS DynamoDB to S3

    - by Ali
    It has been suggested in the Amazon docs (http://aws.amazon.com/dynamodb/), among other places, that you can back up your DynamoDB tables using Elastic MapReduce. I have a general understanding of how this could work, but I couldn't find any guides or tutorials on it. So my question is: how can I automate DynamoDB backups (using EMR)? So far, I think I need to create a "streaming" job with a map function that reads the data from DynamoDB and a reduce that writes it to S3, and I believe these could be written in Python (or Java or a few other languages). Any comments, clarifications, code samples or corrections are appreciated.
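
    For what it's worth, EMR's Hive integration can do this without a custom streaming job. A sketch assuming a table with (say) a string key named id; the storage handler and TBLPROPERTIES keys come from the EMR DynamoDB connector, while the table and bucket names are made up:

        -- Hive table backed directly by the DynamoDB table
        CREATE EXTERNAL TABLE ddb_backup (id string, payload string)
        STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
        TBLPROPERTIES (
            "dynamodb.table.name" = "MyTable",
            "dynamodb.column.mapping" = "id:id,payload:payload"
        );

        -- Export everything to S3; scheduling this statement automates the backup
        INSERT OVERWRITE DIRECTORY 's3://my-bucket/backups/mytable/'
        SELECT * FROM ddb_backup;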

  • updating batches of data

    - by gaponte69
    I am using a GridView in ASP.NET and editing data with the edit command field property (as we know, after updating the edited row, the database is updated automatically). I want to use transactions (with begin-to-commit statements, including rollback) to commit the update queries to the database only after clicking some button (after some events, for example), rather than automatically inserting or updating the edited data from the grid directly into the DB. So I want to save the edits somewhere temporary (even many edited rows, not just one row) and then confirm the transaction, updating the real tables in the database. Any suggestions are welcome. I've found some good, very helpful links, like:

        http://www.asp.net/learn/data-access/tutorial-63-cs.aspx
        http://www.asp.net/learn/data-access/tutorial-66-cs.aspx

    etc...
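
    A minimal sketch of the batched commit being described, with hypothetical table and column names: every queued edit runs inside one transaction, and a failure rolls all of them back together.

        BEGIN TRANSACTION;

        UPDATE orders SET qty = 3 WHERE id = 1;   -- first queued edit
        UPDATE orders SET qty = 5 WHERE id = 2;   -- second queued edit

        -- On any error: ROLLBACK TRANSACTION, and none of the edits persist
        COMMIT TRANSACTION;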

  • Monitor database table for external changes from within Rails application

    - by jhwist
    I'm integrating some non-Rails-model tables into my Rails application. Everything works out very nicely; the way I set up the model is:

        class Change < ActiveRecord::Base
          establish_connection(ActiveRecord::Base.configurations["otherdb_#{RAILS_ENV}"])
          set_table_name "change"
        end

    This way I can use the Change model for all existing records with find etc. Now I'd like to run some sort of notification when a record is added to the table. Since the model never gets created via Change.new and Change.save, using ActiveRecord::Observer is not an option. Is there any way I can get some of my Rails code to be executed whenever a new record is added? I looked at delayed_job but can't quite get my head around how to set that up. I imagine it revolves around a cron job that selects all rows that were created since the job last ran and then calls the respective Rails code for each row.
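
    A sketch of the query such a cron job could poll with, assuming the external table's primary key is monotonically increasing (the cutoff would be persisted between runs):

        -- Fetch only rows added since the last run; 1234 stands in for
        -- the highest id seen last time
        SELECT * FROM change WHERE id > 1234 ORDER BY id;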

  • How to avoid timestamp issue in a long query?

    - by pingi
    Hi, I have the following 2 tables:

        items:
            id   int primary key
            bla  text

        events:
            id_items  int
            num       int
            when      timestamp without time zone
            ble       text
            (composite primary key: id_items, num)

    I want to select, for each item, the most recent event (the newest 'when'). I wrote a query, but I don't know if it could be written more efficiently. Also, on PostgreSQL there is an issue with comparing Timestamp objects (2010-05-08T10:00:00.123 == 2010-05-08T10:00:00.321), so I select with MAX(num). Any thoughts on how to make it better? Thanks.

        SELECT i.*, ea.*
        FROM items AS i
        JOIN (
            SELECT t.s AS t_s, t.c AS t_c, MAX(e.num) AS o
            FROM events AS e
            JOIN (
                SELECT DISTINCT id_items AS s, MAX("when") AS c
                FROM events
                GROUP BY s
                ORDER BY c
            ) AS t ON t.s = e.id_items AND e."when" = t.c
            GROUP BY t.s, t.c
        ) AS tt ON tt.t_s = i.id
        JOIN events AS ea
          ON ea.id_items = tt.t_s AND ea."when" = tt.t_c AND ea.num = tt.o;
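
    On PostgreSQL specifically, DISTINCT ON is a much more direct idiom for "latest event per item" and avoids the nested subqueries entirely. A sketch against the schema above ("when" needs quoting since it is a reserved word):

        SELECT DISTINCT ON (e.id_items) i.*, e.*
        FROM items AS i
        JOIN events AS e ON e.id_items = i.id
        ORDER BY e.id_items, e."when" DESC, e.num DESC;  -- num breaks timestamp ties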

  • List of all index & index columns in SQL Server DB

    - by Anton Gogolev
    How do I get a list of all indexes and index columns in SQL Server 2005+? The closest I could get is:

        select s.name, t.name, i.name, c.name
        from sys.tables t
        inner join sys.schemas s on t.schema_id = s.schema_id
        inner join sys.indexes i on i.object_id = t.object_id
        inner join sys.index_columns ic on ic.object_id = t.object_id
        inner join sys.columns c on c.object_id = t.object_id and ic.column_id = c.column_id
        where i.index_id > 0
          and i.type in (1, 2)              -- clustered & nonclustered only
          and i.is_primary_key = 0          -- do not include PK indexes
          and i.is_unique_constraint = 0    -- do not include UQ
          and i.is_disabled = 0
          and i.is_hypothetical = 0
          and ic.key_ordinal > 0
        order by ic.key_ordinal

    which is not exactly what I want. What I want is to list all user-defined indexes (which means no indexes supporting unique constraints and primary keys) with all columns (ordered by how they appear in the index definition), plus as much metadata as possible.
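
    One concrete defect in the attempt above: sys.index_columns is joined only on object_id, so every index's columns get cross-matched against every other index on the same table. A sketch of a tightened version (same filters, plus the missing index_id join and per-index ordering):

        SELECT s.name AS schema_name,
               t.name AS table_name,
               i.name AS index_name,
               c.name AS column_name,
               ic.key_ordinal
        FROM sys.tables t
        JOIN sys.schemas s        ON s.schema_id = t.schema_id
        JOIN sys.indexes i        ON i.object_id = t.object_id
        JOIN sys.index_columns ic ON ic.object_id = i.object_id
                                 AND ic.index_id = i.index_id   -- the join the original is missing
        JOIN sys.columns c        ON c.object_id = t.object_id
                                 AND c.column_id = ic.column_id
        WHERE i.index_id > 0
          AND i.type IN (1, 2)
          AND i.is_primary_key = 0
          AND i.is_unique_constraint = 0
          AND i.is_disabled = 0
          AND i.is_hypothetical = 0
          AND ic.key_ordinal > 0
        ORDER BY s.name, t.name, i.name, ic.key_ordinal;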

  • Using Entity Framework, how do I specify a sort on a navigation property?

    - by Jared
    I have two tables: [Category], [Item]. They are connected by a join table: [CategoryAndItem]. It has two primary key fields: [CategoryKey], [ItemKey]. Foreign keys exist appropriately and Entity has no problem pulling this in and creating the correct navigation properties that connect the entity objects. Basically each category can have multiple items, and items can be in multiple categories. The problem is that the order of items is specified per category, so that a particular item might be third in one category but fifth in another. In the past, I have added a [Sequence] field to the join table and modified the stored procedure to handle it. But since Entity is replacing my stored procedures, I need to figure out how to make Entity handle the sequence. Any suggestions?
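
    For reference, a sketch of the join table carrying the per-category ordering described above (the int types and exact constraint spellings are assumptions):

        CREATE TABLE CategoryAndItem (
            CategoryKey int NOT NULL REFERENCES Category (CategoryKey),
            ItemKey     int NOT NULL REFERENCES Item (ItemKey),
            Sequence    int NOT NULL,   -- position of this item within this category
            PRIMARY KEY (CategoryKey, ItemKey)
        );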

  • Read tab delimited text file into MySQL table with PHP

    - by Simon S
    I am trying to read in a series of tab delimited text files into existing MySQL tables. The code I have is quite simple:

        $lines = file("import/file_to_import.txt");
        foreach ($lines as $line_num => $line) {
            if ($line_num > 1) {
                $arr = explode("\t", $line);
                $sql = sprintf("INSERT INTO my_table VALUES('%s', '%s', '%s', %s, %s);",
                    trim((string)$arr[0]),
                    trim((string)$arr[1]),
                    trim((string)$arr[2]),
                    trim((string)$arr[3]),
                    trim((string)$arr[4]));
                mysql_query($sql, $database) or die(mysql_error());
            }
        }

    But no matter what I do (hence the casting before each variable in the sprintf statement) I get the "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1" error. I echo out the code, paste it into a MySQL editor and it runs fine; it just won't execute from the PHP script. What am I doing wrong?? Si
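
    As an aside, MySQL can ingest tab-delimited files directly, which sidesteps per-row quoting issues entirely. A sketch matching the loop above (tab-separated fields, skipping the first two lines, since the loop only processes $line_num > 1):

        LOAD DATA LOCAL INFILE 'import/file_to_import.txt'
        INTO TABLE my_table
        FIELDS TERMINATED BY '\t'
        LINES TERMINATED BY '\n'
        IGNORE 2 LINES;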

  • How to select DISTINCT rows without having the ORDER BY field selected

    - by JannieT
    So I have two tables, students (PK sID) and mentors (PK pID). This query:

        SELECT s.pID
        FROM students s
        JOIN mentors m ON s.pID = m.pID
        WHERE m.tags LIKE '%a%'
        ORDER BY s.sID DESC;

    delivers this result:

        pID
        ---
        9
        9
        3
        9
        3
        9
        9
        9
        10
        9
        3
        10
        etc...

    I am trying to get a list of distinct mentor IDs with this ordering, so I am looking for the SQL that produces:

        pID
        ---
        9
        3
        10

    If I simply insert a DISTINCT in the SELECT clause I get an unexpected result of 10, 9, 3 (wrong order). Any help much appreciated.
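
    A common way to keep that ordering while de-duplicating is to group on pID and sort each group by its newest sID, which reproduces the first-appearance order of the DESC scan. A sketch:

        SELECT s.pID
        FROM students s
        JOIN mentors m ON s.pID = m.pID
        WHERE m.tags LIKE '%a%'
        GROUP BY s.pID
        ORDER BY MAX(s.sID) DESC;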

  • max count with joins

    - by trixet
    I have 3 tables:

        users:
            Id  Login
            1   John
            2   Bill
            3   Jim

        computers:
            Id  Name
            1   Computer1
            2   Computer2
            3   Computer3
            4   Computer4
            5   Computer5

        sessions:
            UserId  ComputerId  Minutes
            1       2           47
            2       1           32
            1       4           15
            2       5           5
            1       2           7
            1       1           40
            2       5           31

    I would like to display this resulting table:

        Login  Total_sess  Total_min  Most_freq_computer  Sess_on_most_freq  Min_on_most_freq
        John   4           109        Computer2           2                  54
        Bill   3           68         Computer5           2                  36
        Jim    -           -          -                   -                  -

    Myself I can only cover the first 3 columns with:

        SELECT Login, COUNT(sessions.UserId), SUM(Minutes)
        FROM users
        LEFT JOIN sessions ON users.Id = sessions.UserId
        GROUP BY users.Id

    And some kind of other columns with:

        SELECT main.*
        FROM (
            SELECT UserId, ComputerId, COUNT(*) AS cnt, SUM(Minutes)
            FROM sessions
            GROUP BY UserId, ComputerId
        ) AS main
        INNER JOIN (
            SELECT ComputerId, MAX(cnt) AS maxCnt
            FROM (
                SELECT ComputerId, UserId, COUNT(*) AS cnt
                FROM sessions
                GROUP BY ComputerId, UserId
            ) AS Counts
            GROUP BY ComputerId
        ) AS maxes ON main.ComputerId = maxes.ComputerId AND main.cnt = maxes.maxCnt

    But I need to get the whole resulting table in one query. I feel I'm doing something completely wrong. Need help.
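
    A sketch of one way to cover all six columns in a single statement (NULLs stand in for the dashes, and a tie for most-frequent computer would produce an extra row for that user):

        SELECT u.Login,
               tot.cnt  AS Total_sess,
               tot.mins AS Total_min,
               c.Name   AS Most_freq_computer,
               pc.cnt   AS Sess_on_most_freq,
               pc.mins  AS Min_on_most_freq
        FROM users u
        LEFT JOIN (
            -- per-user totals
            SELECT UserId, COUNT(*) AS cnt, SUM(Minutes) AS mins
            FROM sessions
            GROUP BY UserId
        ) AS tot ON tot.UserId = u.Id
        LEFT JOIN (
            -- per-user, per-computer breakdown
            SELECT UserId, ComputerId, COUNT(*) AS cnt, SUM(Minutes) AS mins
            FROM sessions
            GROUP BY UserId, ComputerId
        ) AS pc ON pc.UserId = u.Id
               AND pc.cnt = (SELECT COUNT(*)          -- keep only the busiest computer
                             FROM sessions s2
                             WHERE s2.UserId = u.Id
                             GROUP BY s2.ComputerId
                             ORDER BY COUNT(*) DESC
                             LIMIT 1)
        LEFT JOIN computers c ON c.Id = pc.ComputerId;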

  • SQL Server 2000 DTS Package Failing with "The number of failing rows exceeds the maximum specified"

    - by Scott McCormick
    I have inherited a SQL Server 2000 DTS package that migrates data from SQL Server to Oracle. This package moves about 20 tables' data to Oracle every night with no transformations, and it is then transformed by a set of SPs and used by a GIS application. Twice this week, during the migration between SQL Server and Oracle, the package has failed with "The number of failing rows exceeds the maximum specified". It has failed on a different table each time, though. Each time it's failed, we've rerun the process the next morning and it has worked. Because the process works the second time it's run, it makes me think the data is being changed by someone or something between the initial failure and our successful second run. I would like to change the DTS package to log the failing rows in a text document so we can compare them later. Can someone help me with that? I can't seem to figure that part out. Scott

  • Quicken-like Windows Forms application

    - by WinFXGuy
    Hi all, I need to build a Quicken-like application where the data needs to be secure. I don't see any database being used by Quicken. I could use XML, an MDF or an Access database, but the data is not secure in the tables. What is the best option? How does Quicken handle it? My application may also have document attachments. The functionality of this application is similar to Quicken, but it is not accounting/financial in nature. Thanks a bunch!

  • Stored Procedure with ALTER TABLE

    - by psayre23
    I have a need to sync auto_increment fields between two tables in different databases on the same MySQL server. The hope was to create a stored procedure where the admin's permissions would let the web user run ALTER TABLE [db1].[table] AUTO_INCREMENT = [num]; without granting those permissions directly (passing raw SQL through just smells of SQL injection). My problem is I'm receiving errors when creating the stored procedure. Is this something that is not allowed by MySQL?

        DROP PROCEDURE IF EXISTS sync_auto_increment;
        CREATE PROCEDURE set_auto_increment (tableName VARCHAR(64), inc INT)
        BEGIN
            ALTER TABLE tableName AUTO_INCREMENT = inc;
        END;
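
    For context, MySQL will not substitute a procedure parameter as an identifier, so the ALTER TABLE above targets a table literally named tableName. The usual workaround is a prepared statement built with CONCAT; a sketch (the caller-supplied name is still effectively dynamic SQL, so access to the procedure should stay restricted):

        DELIMITER //
        CREATE PROCEDURE set_auto_increment (tableName VARCHAR(64), inc INT)
        BEGIN
            -- Identifiers cannot be placeholders, so splice the table name in
            SET @ddl = CONCAT('ALTER TABLE `', tableName, '` AUTO_INCREMENT = ', inc);
            PREPARE stmt FROM @ddl;
            EXECUTE stmt;
            DEALLOCATE PREPARE stmt;
        END //
        DELIMITER ;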

  • Major performance difference between two Oracle database instances

    - by jrdioko
    I am working with two instances of an Oracle database, call them one and two. two is running on better hardware (hard disk, memory, CPU) than one, and two is one minor version behind one in terms of Oracle version (both are 11g). Both have the exact same table table_name with exactly the same indexes defined. I load 500,000 identical rows into table_name on both instances. I then run, on both instances: delete from table_name; This command takes 30 seconds to complete on one and 40 minutes to complete on two. Doing INSERTs and UPDATEs on the two tables has similar performance differences. Does anyone have any suggestions on what could have such a drastic impact on performance between the two databases?

  • How to create a view of table that contains a timestamp column?

    - by Matt Faus
    This question is an extension of a previous one I have asked. I have a table (2014_05_31_transformed.Video) with a schema that looks like this (I have put up the JSON returned by the BigQuery API describing its schema in this gist). I am trying to create a view against this table with an API call that looks like this:

        {
            'view': {
                'query': u'SELECT deleted_mod_time FROM [2014_05_31_transformed.Video]'
            },
            'tableReference': {
                'datasetId': 'latest_transformed',
                'tableId': u'Video',
                'projectId': 'redacted'
            }
        }

    But the BigQuery API is returning this error:

        HttpError: https://www.googleapis.com/bigquery/v2/projects/124072386181/datasets/latest_transformed/tables?alt=json returned "Invalid field name "deleted_mod_time.usec". Fields must contain only letters, numbers, and underscores, start with a letter or underscore, and be at most 128 characters long."

    The schema the BigQuery API returns makes no distinction between a TIMESTAMP data type and a regular nullable INTEGER data type, so I can't think of a way to programmatically correct the problem. Is there anything I can do, or is this a bug in BigQuery's view implementation?

  • Do I need to insert one fake row in the database?

    - by Ankit Rathod
    Hello, I have a few tables, for example:

        Users:              UID, UName, Password, Email
        Books:              BookId, Name, Price
        UsersBookPurchase:  UserId, BookId

    This is fine. I have my own login system, but I am also using a third party to validate, like OpenID or Facebook authentication. My question is: if the user logs in successfully via OpenID or Facebook authentication, what steps do I need to take? That is, do I have to insert a placeholder row into the Users table? If I don't, how will integrity be maintained? I mean, what UserId should I insert into UsersBookPurchase when someone who logged in with Facebook authentication makes a purchase, given that UserId is a foreign key referencing the Users table. Please give me a high-level overview of what I need to do, because this is a fairly common scenario. Thanks in advance :)
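
    One common pattern (not the only one) is to insert a real Users row for every external identity, so the foreign key always resolves. A sketch; the Provider and ExternalId columns are hypothetical additions to the schema above:

        -- A row created on first Facebook/OpenID login; Password stays NULL
        INSERT INTO Users (UName, Password, Email, Provider, ExternalId)
        VALUES ('fb_12345', NULL, 'user@example.com', 'facebook', '12345');

        -- Purchases then reference that row like any locally registered user
        INSERT INTO UsersBookPurchase (UserId, BookId)
        VALUES (101, 7);   -- 101 = the UID generated above (assumed)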

  • Dynamic table design (common lookup table), need a nice query to get the values

    - by Swoosh
    sql2005. This is my simplified example (in reality there are 40+ tables here; I only show 2). I have a table called tb_modules with 3 columns (id, description, tablename as varchar):

        1, UserType, tb_usertype
        2, Religion, tb_religion

    (The last column is actually the name of a different table.) I have another table, tb_value (columns: id, tb_modules_ID, usertype_OR_religion_ID), with values:

        1111, 1, 45
        1112, 1, 55
        1113, 2, 123
        1114, 2, 234

    So 45 and 55 are usertype IDs and 123 and 234 are religion IDs. Don't judge, I didn't design the database. Question: how can I write a select showing * from tb_value plus one column, where that extra column is TITLE from tb_usertype or RELIGIONNAME from tb_religion? I would like to make it generic. I was initially thinking about a SQL function that returns a string, but I think I would need dynamic SQL, which is not OK in a function. Anyone have a better idea?
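
    For just the two modules shown, a static version can join each lookup table conditionally and pick the matching column; scaling this to 40+ tables is where dynamic SQL generated from tb_modules would come in. A sketch (the lookup tables' id column names are assumptions):

        SELECT v.*,
               CASE v.tb_modules_ID
                   WHEN 1 THEN ut.TITLE
                   WHEN 2 THEN r.RELIGIONNAME
               END AS lookup_value
        FROM tb_value v
        LEFT JOIN tb_usertype ut
               ON v.tb_modules_ID = 1 AND ut.id = v.usertype_OR_religion_ID
        LEFT JOIN tb_religion r
               ON v.tb_modules_ID = 2 AND r.id = v.usertype_OR_religion_ID;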

  • For a SQL select returning more than one row, how are the rows sorted when the Id is a GUID?

    - by Chris F
    I'm wondering how MSSQL orders data returned from a query when the Id columns of the respective tables are all of type uniqueidentifier. I'm using NHibernate's GuidComb when creating all of the GUIDs, and I do things like:

        Sheet sheet = sheetRepository.Get(_SheetGuid_);  // has many line items
        IList<SheetLineItem> lineItems = sheet.LineItems;

    I'm just trying to figure out how they'll be ordered when I do something like:

        foreach (SheetLineItem lineItem in lineItems)

    I can't seem to find a good article on the way GUIDs are compared by SQL Server when being ordered, if that's what's happening.
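
    Two points worth illustrating: without an ORDER BY, SQL Server guarantees no particular order at all; and when you do ORDER BY a uniqueidentifier, SQL Server compares byte groups from the rightmost group to the leftmost (the documented but surprising behavior GuidComb exploits). A small demo, offered as a sketch to verify:

        DECLARE @t TABLE (id uniqueidentifier);
        INSERT INTO @t VALUES
            ('00000000-0000-0000-0000-000000000001'),
            ('01000000-0000-0000-0000-000000000000');

        -- The GUID ending in ...0001 sorts last: the last byte group dominates
        SELECT id FROM @t ORDER BY id;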

  • ASCII in Windows XP and Ubuntu Linux

    - by Mikey D
    I've made a program in MSVC++ which outputs memory contents (in ASCII). The ASCII I see in the Windows console seems to match what I see in various ASCII tables (smiley, diamond, club, right arrow etc.). This program needs to compile under Linux (which it does), but the ASCII output looks completely different: a few symbols are the same, but the rest are very different. Is there any way to change how the terminal displays ASCII codes? EDIT: The program executes correctly; it's just the ASCII that is being displayed differently.

  • Kohana 3 ORM limitations

    - by yoda
    Hi, what are the limitations of Kohana 3 ORM regarding table relationships? I'm trying to modify the built-in Auth module so that it also accepts groups of users, having now the following tables:

        groups
        groups_users
        roles
        roles_groups
        users
        user_tokens

    By default, this module is set up to work without groups, linking users and roles through a third table named roles_users, but I need to add groups to it. I'm linking, as you can see by the names, the groups to users and the roles to groups, but I'm failing at building the ORM code for it. So that's pretty much the question: is the ORM limited to 2 relationships, or can it handle 3 in this case? Cheers!
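
    For reference, the lookup the ORM needs to express is a two-hop join from users to roles through groups; a sketch in plain SQL (the foreign key column names are assumptions):

        SELECT r.*
        FROM users u
        JOIN groups_users gu ON gu.user_id  = u.id
        JOIN groups g        ON g.id        = gu.group_id
        JOIN roles_groups rg ON rg.group_id = g.id
        JOIN roles r         ON r.id        = rg.role_id
        WHERE u.id = 42;   -- hypothetical user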

  • A quick overview of facebook's db?

    - by Matt
    Hey guys, I find it hard to believe that Facebook uses simple SQL (surely it would use some other method), but let's assume for now that it does. How would the code assembling the 'wall' work? Let's say there are three tables (just for the example):

        Friends:   id (entry key), uid (your id), fid (your mate's id)
        Wall:      id (entry key), username, comment, time, commentcount
        Comments:  id (entry key), wid (wall id of the original comment), reply, time

    Let's forget about the like part, reporting etc., as well as mod things (IP, ban etc.). How would this work?

        SELECT wall.id, wall.username, wall.comment, wall.time, wall.commentcount,
               comments.wid, comments.reply, comments.time
        FROM wall
        INNER JOIN comments ON wall.id = comments.wid
        ORDER BY wall.time;

    That's your own wall, but how do they get friends' walls? A heap of UNIONs?
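
    Rather than a heap of UNIONs, a friends' feed is usually one join through the friendship table. A sketch against the schema above, assuming wall rows carry their author's user id in a hypothetical uid column:

        SELECT w.*
        FROM friends f
        JOIN wall w ON w.uid = f.fid      -- posts written by each friend
        WHERE f.uid = 42                   -- my user id
        ORDER BY w.time DESC;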
