Search Results

Search found 54131 results on 2166 pages for 'database project'.

Page 430 of 2166

  • Automatically create table on MySQL server based on date?

    - by Anthony
    Is there an equivalent to cron for MySQL? I have a PHP script that queries a table based on the month and year, like: SELECT * FROM data_2010_1. What I have been doing until now is: every time the script executes, it queries for the table; if the table exists it does its work, and if it doesn't, it creates the table. I was wondering if I can just set something up on the MySQL server itself that will create the table (based on a default table) at the stroke of midnight on the first of the month.
    Update: based on the comments I've gotten, I'm thinking this isn't the best way to achieve my goal. So here are two more questions: If I have a table with thousands of rows added monthly, is this potentially a drag on resources? If so, what is the best way to partition this table, since the above is verboten? What are the potential problems with the home-grown method I originally thought up?
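    One hedged sketch (not from the original post): instead of one table per month, a single table partitioned by month keeps the monthly queries fast without any scheduled DDL. The table and column names below are assumptions for illustration.

        -- Single table, partitioned by month (MySQL 5.1+).
        -- The partition key must be part of every unique key, hence (id, created).
        CREATE TABLE data (
          id      INT NOT NULL AUTO_INCREMENT,
          created DATE NOT NULL,
          payload VARCHAR(255),
          PRIMARY KEY (id, created)
        )
        PARTITION BY RANGE (TO_DAYS(created)) (
          PARTITION p201001 VALUES LESS THAN (TO_DAYS('2010-02-01')),
          PARTITION p201002 VALUES LESS THAN (TO_DAYS('2010-03-01')),
          PARTITION pmax    VALUES LESS THAN MAXVALUE
        );

        -- A query for one month then touches only that month's partition:
        SELECT * FROM data WHERE created >= '2010-01-01' AND created < '2010-02-01';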

    Read the article

  • Performance of VIEW vs. SQL statement

    - by Matt W.
    I have a query that goes something like the following:

        select <field list>
        from <table list>
        where <join conditions> and <condition list>
          and PrimaryKey in (select PrimaryKey from <table list> where <join list> and <condition list>)
          and PrimaryKey not in (select PrimaryKey from <table list> where <join list> and <condition list>)

    Both sub-selects have multiple sub-selects of their own that I'm not showing, so as not to clutter the statement. One of the developers on my team thinks a view would be better. I disagree, because the SQL statement uses variables passed in by the program (based on the user's login id). Are there any hard and fast rules on when a view should be used vs. a plain SQL statement? What kind of performance gains or issues are there in running SQL statements on their own against regular tables vs. against views? (Note that all the joins / where conditions are against indexed columns, so that shouldn't be an issue.)

    EDIT for clarification... Here's the query I'm working with:

        select obj_id from object
        where obj_id in (
                (select distinct(sec_id) from security
                 where sec_type_id = 494
                   and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230)
                      or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278)
                          and sec_usergroup_type_id = 231) )
                   and sec_obj_id in (
                       select obj_id from object
                       where obj_ot_id in (
                           select of_ot_id from obj_form
                           left outer join obj_type on ot_id = of_ot_id
                           where ot_app_id = 87
                             and of_id in (
                                 select sec_obj_id from security
                                 where sec_type_id = 493
                                   and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230)
                                      or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278)
                                          and sec_usergroup_type_id = 231) ) )
                             and of_usage_type_id = 131 ) ) )
            )
           or (obj_ot_id in (
                   select of_ot_id from obj_form
                   left outer join obj_type on ot_id = of_ot_id
                   where ot_app_id = 87
                     and of_id in (
                         select sec_obj_id from security
                         where sec_type_id = 493
                           and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230)
                              or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278)
                                  and sec_usergroup_type_id = 231) ) )
                     and of_usage_type_id = 131 )
               and obj_id not in (select sec_obj_id from security where sec_type_id = 494) )
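    A hedged illustration of the trade-off (the view name is hypothetical; object and security come from the query above): an ordinary, non-indexed view is just a stored SELECT that the optimizer expands into the calling query, so the login-specific values can still be supplied by the outer statement, and the execution plan is typically the same as for the hand-written SQL.

        -- The view holds the reusable join logic...
        create view visible_objects as
        select o.obj_id, s.sec_usergroup_id, s.sec_type_id
        from object o
        join security s on s.sec_obj_id = o.obj_id;

        -- ...and the per-user value is applied when the view is queried.
        select obj_id
        from visible_objects
        where sec_usergroup_id = 3278
          and sec_type_id = 494;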

    Read the article

  • How to display SUM fields from a detailed table in a master table

    - by max
    What is the best approach to display the summary of DETAIL fields in the master table? E.g. I have a master table called 'BILL' with all the bill-related data and a detail table ('BILL_DETAIL') with the bill detail data, like NAME, PRICE, TAX, ... Now I want to list all BILLs, without the details, but with the sum of the PRICE and TAX stored in the detail table. Here is a simplified schema of those tables:

        TABLE BILL
        ----------
        - ID
        - NAME
        - ADDRESS
        - ...

        TABLE BILL_DETAIL
        -----------------
        - ID
        - BILLID
        - PRODUCT_NAME
        - PRICE
        - TAX
        - ...

    The retrieved row should look like this: BILL.CUSTOMER_NAME, BILL.CUSTOMER_ADDRESS, sum(BILL_DETAIL.PRICE), sum(BILL_DETAIL.TAX), ... Any suggestions?
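    A minimal sketch based on the simplified schema above: join the detail rows to their bill and aggregate per bill (NAME and ADDRESS stand in for the CUSTOMER_* columns mentioned in the question).

        SELECT b.id,
               b.name,
               b.address,
               SUM(d.price) AS total_price,
               SUM(d.tax)   AS total_tax
        FROM   bill b
        LEFT JOIN bill_detail d ON d.billid = b.id
        GROUP BY b.id, b.name, b.address;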

    Read the article

  • Optimize a MySQL count each duplicate Query

    - by Onema
    I have the following query that gets the city name, city id, the region name, and a count of duplicate names for that record:

        SELECT Country_CA.City AS currentCity, Country_CA.CityID, globe_region.region_name,
               ( SELECT count(Country_CA.City) FROM Country_CA WHERE City LIKE currentCity ) as counter
        FROM Country_CA
        LEFT JOIN globe_region ON globe_region.region_id = Country_CA.RegionID
                              AND globe_region.country_code = Country_CA.CountryCode
        ORDER BY City

    This example is for Canada, and the cities will be displayed in a dropdown list. There are a few towns in Canada, and in other countries, that have the same names. Therefore, if there is more than one town with the same name, the region name will be appended to the town name. Region names are found in the globe_region table. Country_CA and globe_region look similar to this (I have changed a few things for visualization purposes):

        CREATE TABLE IF NOT EXISTS `Country_CA` (
          `City` varchar(75) NOT NULL DEFAULT '',
          `RegionID` varchar(10) NOT NULL DEFAULT '',
          `CountryCode` varchar(10) NOT NULL DEFAULT '',
          `CityID` int(11) NOT NULL DEFAULT '0',
          PRIMARY KEY (`City`,`RegionID`),
          KEY `CityID` (`CityID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    and

        CREATE TABLE IF NOT EXISTS `globe_region` (
          `country_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_name` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`country_code`,`region_code`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    The query at the top does exactly what I want it to do, but it takes way too long to generate a list for 5000 records. I would like to know if there is a way to optimize the sub-query in order to obtain the same results faster. The results should look like this:

        City        CityID    region_name        counter
        sheraton    2349269   British Columbia   1
        sherbrooke  2349270   Quebec             2
        sherbrooke  2349271   Nova Scotia        2
        shere       2349273   British Columbia   1
        sherridon   2349274   Manitoba           1
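    One hedged rewrite (keeping the column names from the query above, including the region_id join the question uses): compute the per-name counts once in a derived table and join to it, instead of running a correlated sub-query with a LIKE comparison for every row.

        SELECT c.City AS currentCity,
               c.CityID,
               r.region_name,
               dup.counter
        FROM Country_CA c
        LEFT JOIN globe_region r
               ON r.region_id = c.RegionID
              AND r.country_code = c.CountryCode
        JOIN (SELECT City, COUNT(*) AS counter
              FROM Country_CA
              GROUP BY City) AS dup
              ON dup.City = c.City
        ORDER BY c.City;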

    Read the article

  • How do I perform 'WHERE' on groups of rows?

    - by Drew
    I have a table which looks like:

        +-----------+----------+
        | person_id | group_id |
        +-----------+----------+
        |     1     |    10    |
        |     1     |    20    |
        |     1     |    30    |
        |     2     |    10    |
        |     2     |    20    |
        |     3     |    10    |
        +-----------+----------+

    I need a query such that only person_ids with groups 10 AND 20 AND 30 are returned (only person_id: 1). I am not sure how to do this, as from what I can see it would require me to group the rows by person_id and then select the rows which contain all group_ids. I'm looking for something which will preserve the use of keys without resorting to string operations on group_concat() or such.
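    One standard approach, shown as a hedged sketch (the table name person_group is a placeholder, since the question doesn't give one): filter to the wanted groups, group by person, and keep only the people whose distinct group count matches the number of required groups.

        SELECT person_id
        FROM   person_group
        WHERE  group_id IN (10, 20, 30)
        GROUP BY person_id
        HAVING COUNT(DISTINCT group_id) = 3;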

    Read the article

  • Maximum number of rows in a DBMS

    - by Am1rr3zA
    Is there any limit to the maximum number of rows in a table in a DBMS (specifically MySQL)? I want to create a table for saving a log file, and its row count is increasing very fast. I want to know what I should do to prevent any problems.
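    A hedged note, expressed in SQL (the table definition is an assumption for illustration): in MySQL the practical ceiling usually comes from the storage engine rather than a fixed row count; for MyISAM, the MAX_ROWS / AVG_ROW_LENGTH options let the server size its internal data pointer for a very large log table.

        CREATE TABLE access_log (
          id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
          logged_at DATETIME NOT NULL,
          message   VARCHAR(255),
          PRIMARY KEY (id),
          KEY (logged_at)
        ) ENGINE=MyISAM
          MAX_ROWS = 1000000000
          AVG_ROW_LENGTH = 300;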

    Read the article

  • Getting a weird issue with the TO_NUMBER function in Oracle

    - by Fazal
    I have been getting an intermittent issue when executing the to_number function in the where clause on a varchar2 column if the number of records exceeds a certain number n. I use n because there is no exact number of records at which it happens: on one DB it happens after n was 1 million, on another when it was 0.1 million. E.g. I have a table with 10 million records, say table Country, which has field1 (a varchar2 containing numeric data) and Id. If I do a query such as

        select * from country where to_number(field1) = 23 and id > 1 and id < 100000

    this works. But if I do the query

        select * from country where to_number(field1) = 23 and id > 1 and id < 100001

    it fails saying "invalid number". Next I try the query

        select * from country where to_number(field1) = 23 and id > 2 and id < 100001

    and it works again. As I only got "invalid number" it was confusing, but in the log file it said:

        Memory Notification: Library Cache Object loaded into SGA
        Heap size 3823K exceeds notification threshold (2048K)
        KGL object name :with sqlplan as (
          select c006 object_owner, c007 object_type, c008 object_name
          from htmldb_collections
          where COLLECTION_NAME='HTMLDB_QUERY_PLAN'
            and c007 in ('TABLE','INDEX','MATERIALIZED VIEW','INDEX (UNIQUE)')),
        ws_schemas as(
          select schema from wwv_flow_company_schemas
          where security_group_id = :flow_security_group_id),
        t as(
          select s.object_owner table_owner, s.object_name table_name, d.OBJECT_ID
          from sqlplan s, sys.dba_objects d

    It seems it's related to SGA size, but Google did not give me much help on this. Does anyone have any idea about this issue with TO_NUMBER or Oracle functions for large data?
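    A hedged sketch of a common workaround (not from the original post): if field1 sometimes holds non-numeric text, whether TO_NUMBER blows up depends on which rows the optimizer happens to evaluate first, so guarding the conversion removes the dependence on plan and row count. REGEXP_LIKE needs Oracle 10g or later.

        select *
        from   country
        where  id > 1
          and  id < 100001
          and  to_number(
                 case when regexp_like(trim(field1), '^-?[0-9]+$') then field1 end
               ) = 23;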

    Read the article

  • Static DB Provider in ASP.NET MVC Causing Memory Leak

    - by user364685
    Hi, I have an app I'm going to write in ASP.NET MVC and I want to create a DatabaseFactory object, something like this:

        public class DatabaseFactory
        {
            private string dbConn { get { return <gets from config file>; } }

            public IDatabaseTableObject GetDatabaseTable()
            {
                IDatabaseTableObject databaseTableObject = new SQLDatabaseObject(dbConn);
                return databaseTableObject;
            }
        }

    This works fine, but I obviously have to instantiate the DatabaseFactory in every controller that needs it. If I made this static, so that I could in theory just call DatabaseFactory.GetDatabaseTable(), it would cause a memory leak, wouldn't it?

    Read the article

  • PHP MySQL - replace a substring inside a string

    - by apis17
    I want to replace ALL commas , with ,<space> in the Address column of my MySQL table. For example,

        +----------------+----------------+
        | Name           | Address        |
        +----------------+----------------+
        | Someone name   | A1,Street Name |
        +----------------+----------------+

    into

        +----------------+----------------+
        | Name           | Address        |
        +----------------+----------------+
        | Someone name   | A1, Street Name|
        +----------------+----------------+

    Thanks in advance.
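    A minimal sketch (the table name address_table is a placeholder): strip any existing comma-plus-space first so it is not doubled, then put a single space after every comma.

        UPDATE address_table
        SET Address = REPLACE(REPLACE(Address, ', ', ','), ',', ', ');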

    Read the article

  • Dummies' guide to locking in InnoDB

    - by ming yeow
    The typical documentation on locking in InnoDB is way too confusing. I think it would be of great value to have a "dummies' guide to InnoDB locking". I will start, and I will gather all responses as a wiki: The column needs to be indexed before row-level locking applies. EXAMPLE: delete row where column1=10; will lock up the table unless column1 is indexed.
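    A hedged illustration of that first rule (table and index names are placeholders): without an index on column1, InnoDB has to examine, and therefore lock, every row it scans; with an index it locks only the matching index records.

        -- Without this index, the DELETE below effectively blocks concurrent
        -- writers to the whole table; with it, only rows where column1 = 10
        -- (plus the adjacent gaps) are locked.
        CREATE INDEX idx_column1 ON some_table (column1);

        START TRANSACTION;
        DELETE FROM some_table WHERE column1 = 10;
        COMMIT;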

    Read the article

  • Fact table with multiple facts

    - by Jeff Meatball Yang
    I have a dimension (SiteItem) that has two important facts: perUserClicks and perBrowserClicks. However, within this dimension I have groups of dimensions based on an attribute column (let's call the groups AboveFoldItems, LeftNavItems, OnTheFlyItems, etc.), and each has more facts that are specific to that group:

        AboveFoldItems: eyeTime, loadTime
        LeftNavItems:   mouseOverTime
        OnTheFlyItems:  doesn't have any extra, but may in the future

    Is the following fact table schema ok?

        DateKey
        SessionKey
        SiteItemKey
        perUserClicks
        perBrowserClicks
        eyeTime
        loadTime
        mouseOverTime

    It seems a little wasteful since only some columns pertain to some dimension keys (the irrelevant facts are left NULL). But... this seems like it would be a common problem, so there should be a common solution for this, right?
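    One common alternative, shown as a hedged sketch (table names and column types are assumptions): keep a core fact table for the measures shared by every group, plus a narrow extension fact table per group at the same grain, so no row carries NULL placeholders for measures that don't apply to it.

        CREATE TABLE fact_site_item (
          DateKey          INT NOT NULL,
          SessionKey       INT NOT NULL,
          SiteItemKey      INT NOT NULL,
          perUserClicks    INT,
          perBrowserClicks INT,
          PRIMARY KEY (DateKey, SessionKey, SiteItemKey)
        );

        CREATE TABLE fact_above_fold_item (
          DateKey     INT NOT NULL,
          SessionKey  INT NOT NULL,
          SiteItemKey INT NOT NULL,
          eyeTime     INT,
          loadTime    INT,
          PRIMARY KEY (DateKey, SessionKey, SiteItemKey)
        );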

    Read the article

  • When I zip up my demo FlashDevelop project... why does it break?

    - by Ryan
    I built an AS3 image gallery using FlashDevelop. Before I zip up the application, I can run the image gallery in my browser by simply opening the project's index.html. Everything works perfectly. I then zip up the project as proj-0.1.2.zip using WinRAR. I then unzip this newly created zip and try to load the application using the project's index.html as above. The gallery doesn't function properly. From seeing what happens, it appears as though the image metadata is not present (but I'm not sure, see below). There are other applications that are broken as well. Videos don't load. If an application doesn't depend on any external assets, then everything looks fine. Another thing: if I then rebuild the FlashDevelop project and republish the SWF, then it works in the index.html like I want. What is going on here? I want people to be able to fire up my demo apps out of the box by just running the index.html. If that doesn't always work and they have to figure out that they need to rebuild the SWF, then that's pretty bad.

    Read the article

  • How can I check a type's dependents in order to drop them and replace/modify the initial type?

    - by pctroll
    I tried to modify a type using the following code and it gave me error ORA-02303. I don't know much about Oracle or PL/SQL, but I need to solve this, so I'd appreciate any help. Thanks in advance. The code is just an example; but then again, I need to check its dependents first.

        create or replace type A as object (
          x_ number,
          y_ varchar2(10),
          member procedure to_upper
        );
        /
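    A hedged sketch of how the dependents can be listed before touching the type (this requires access to the ALL_DEPENDENCIES or DBA_DEPENDENCIES view; the type name 'A' matches the example above):

        select owner, name, type
        from   all_dependencies
        where  referenced_name = 'A'
          and  referenced_type = 'TYPE';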

    Read the article

  • WPF: How to bind and update display with DataContext

    - by Am
    I'm trying to do the following: I have a TabControl with several tabs. Each TabControlItem.Content points to PersonDetails, which is a UserControl. Each BookDetails has a dependency property called IsEditMode. I want a control outside of the TabControl, named ToggleEditButton, to be updated whenever the selected tab changes. I thought I could do this by changing the ToggleEditButton data context, but it doesn't seem to work (I'm new to WPF, so I might be way off). The code changing the data context:

        private void tabControl1_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            if (e.Source is TabControl)
            {
                if (e.Source.Equals(tabControl1))
                {
                    if (tabControl1.SelectedItem is CloseableTabItem)
                    {
                        var tabItem = tabControl1.SelectedItem as CloseableTabItem;
                        RibbonBook.DataContext = tabItem.Content as BookDetails;
                        ribbonBar.SelectedTabItem = RibbonBook;
                    }
                }
            }
        }

    The DependencyProperty under BookDetails:

        public static readonly DependencyProperty IsEditModeProperty =
            DependencyProperty.Register("IsEditMode", typeof (bool), typeof (BookDetails),
                                        new PropertyMetadata(true));

        public bool IsEditMode
        {
            get { return (bool)GetValue(IsEditModeProperty); }
            set
            {
                SetValue(IsEditModeProperty, value);
                SetValue(IsViewModeProperty, !value);
            }
        }

    And the relevant XAML:

        <odc:RibbonTabItem Title="Book" Name="RibbonBook">
            <odc:RibbonGroup Title="Details" Image="img/books2.png" IsDialogLauncherVisible="False">
                <odc:RibbonToggleButton Content="Edit" Name="ToggleEditButton"
                                        odc:RibbonBar.MinSize="Medium"
                                        SmallImage="img/edit_16x16.png"
                                        LargeImage="img/edit_32x32.png"
                                        Click="Book_EditDetails"
                                        IsChecked="{Binding Path=IsEditMode, Mode=TwoWay}"/>
        ...

    There are two things I want to accomplish: have the button reflect IsEditMode for the visible tab, and have the button change the property value with no code-behind (if possible). Any help would be greatly appreciated.

    Read the article

  • MSSQL 2005: Rename DB Server Instance Name?

    - by Code Sherpa
    Hi, can somebody tell me how to rename the DB server instance name and a DB name in MSSQL 2005? Right now I have SERVER/OLDNAME -- oldnameDB. I want to change the server instance and also change the db name. I have tried:

        EXEC sp_renamedb 'oldName', 'newName'

    and that has changed the db name as it appears in the tree directory. But when I do "select @@servername" it is still the old name. Also, the MDF and LDF files still have the old name. How do I change the instance and db names as a clean sweep across the server? Thanks.
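    A hedged sketch of the usual fix for the @@servername part (the names are placeholders, and the SQL Server service must be restarted before the change is visible; note that the instance-name portion of a named instance cannot be changed this way, only the server-name portion). The physical MDF/LDF file names are a separate step, typically handled by detaching the database, renaming the files, and re-attaching them.

        EXEC sp_dropserver 'SERVER\OLDNAME';
        GO
        EXEC sp_addserver 'SERVER\NEWNAME', 'local';
        GO
        -- After the service restart:
        SELECT @@SERVERNAME;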

    Read the article

  • Creating an appropriate index for a frequently used query in SQL Server

    - by Slauma
    In my application I have two queries which will be used quite frequently. The WHERE clauses of these queries are the following:

        WHERE FieldA = @P1 AND (FieldB = @P2 OR FieldC = @P2)

    and

        WHERE FieldA = @P1 AND FieldB = @P2

    P1 and P2 are parameters entered in the UI or coming from external data sources.

        FieldA is an int and highly non-unique, meaning only two, three, or four different values in a table with, say, 20000 rows.
        FieldB is a varchar(20) and is "almost" unique; there will be only very few rows where FieldB has the same value.
        FieldC is a varchar(15) and also highly distinct, but not as much as FieldB.
        FieldA and FieldB together are unique (but do not form my primary key, which is a simple auto-incrementing identity column with a clustered index).

    I'm wondering now what's the best way to define an index to speed up specifically these two queries. Shall I define one index with FieldB (or better FieldC here?), FieldC (or better FieldB here?), FieldA ... or better two indices: (FieldB, FieldA) and (FieldC, FieldA)? Or are there even other and better options? What's the best way and why? Thanks in advance for any suggestions!
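    A hedged sketch of the two-index option discussed above (the index and table names are placeholders): with FieldB and FieldC leading, each index is selective on its own, and FieldA rides along as a second key column so both predicates can be checked from the index.

        CREATE NONCLUSTERED INDEX IX_MyTable_FieldB_FieldA ON dbo.MyTable (FieldB, FieldA);
        CREATE NONCLUSTERED INDEX IX_MyTable_FieldC_FieldA ON dbo.MyTable (FieldC, FieldA);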

    Read the article

  • How to secure phpMyAdmin

    - by Andrei
    Hi, I have noticed that there are strange requests to my website trying to find phpMyAdmin, like /phpmyadmin/, /pma/, etc. I have installed PMA on Ubuntu via apt and would like to access it via a web address different from /phpmyadmin/. What can I do to change it? Thanks

    Read the article

  • MySQL: automatic rollback on transaction failure

    - by praksant
    Is there any way to set MySQL to roll back any transaction on the first error/warning automatically? Right now, if everything goes well it commits, but on failure it leaves the transaction open, and on the next start of a transaction it commits the incomplete changes from the failed transaction. (I'm executing the queries from PHP, but I don't want to check for failure in PHP, as it would add round trips between the MySQL server and the web server.) Thank you
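    A hedged sketch of one server-side option (the procedure name and statements are placeholders): MySQL does not generally roll back the whole transaction on an error, but if the statements are wrapped in a stored procedure, an exit handler rolls everything back on the first error without any checking in PHP. RESIGNAL needs MySQL 5.5+; on older versions the handler can simply roll back and return.

        DELIMITER //
        CREATE PROCEDURE apply_changes()
        BEGIN
          DECLARE EXIT HANDLER FOR SQLEXCEPTION
          BEGIN
            ROLLBACK;
            RESIGNAL;  -- re-raise so the client still sees the original error
          END;

          START TRANSACTION;
          -- ... the statements previously sent one by one from PHP ...
          COMMIT;
        END//
        DELIMITER ;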

    Read the article

  • How to efficiently store and update binary data in MongoDB?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes. I was looking for some $append_bytes and $replace_bytes type of modifiers, but it appears that the best I can do is $push for arrays. It seems like this would be doable by performing seek-write type operations if I somehow had access to the underlying BSON on disk, but it does not appear to me that there is any way to do this in MongoDB (and probably for good reason). If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly will this be? Each binary array will be on the order of 1-2MB, and updates occur once every 5 minutes and across 1000s of documents. Worse yet, there is no easy way to spread these out (in time), and they will usually be happening close to one another on the 5-minute intervals. Does anyone have a good feel for how disastrous this will be? Seems like it would be problematic. An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files on disk, and reference the filename from my MongoDB document. (I'm using Python and pymongo, so I was looking at PyTables.) I'd prefer to avoid this though, if possible. Is there any other alternative that I am overlooking here? Thanks in advance.

    Read the article
