Search Results

Search found 31891 results on 1276 pages for 'database schema'.


  • Storing user settings in a table - how?

    - by Mdillion
    I have about 200 settings per user; these include notification settings and settings for tracing user activity on objects. The problem is how to store them in the DB. Should each setting be a row or a column? If a column, the table will have 200 columns. If a row, the table only needs about 3 columns, but 200 rows per user multiplied by even 10 million users is not good. So how else can I store all these settings? NOTE: these settings are a mix of text entries and FK lookups to other tables. Thanks.
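    One option hinted at above is a narrow key/value settings table, one row per (user, setting); a minimal sketch with hypothetical table and column names:

        -- Hypothetical EAV-style layout: one row per (user, setting).
        -- Storing only values that differ from their defaults keeps the
        -- row count well below 200 x number of users.
        CREATE TABLE user_settings (
            user_id    INT NOT NULL REFERENCES users(id),
            setting_id INT NOT NULL REFERENCES settings(id),
            text_value VARCHAR(255),  -- for the text-entry settings
            fk_value   INT,           -- for the FK-lookup settings
            PRIMARY KEY (user_id, setting_id)
        );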

    Read the article

  • Which is a better design pattern for a database wrapper: Save as you go or Save when you're done?

    - by izuriel
    I know this is probably a bad way to ask this question, but I was unable to find another question that addresses it. The full question is this: we're producing a wrapper for a database and have two different viewpoints on managing data with the wrapper. The first is that all changes made to a data object in code must be persisted in the database by calling a "save" method to actually save the changes. The other view is that changes should be saved as they are made: if I change one property it's saved, if I change another it's saved as well. What are the pros/cons of either choice, and which is the "proper" way to manage the data?
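    For what it's worth, the "save when you're done" model maps naturally onto a database transaction; a generic SQL sketch with hypothetical table and column names:

        -- "Save when done": batch the object's changes and persist them
        -- atomically when the caller invokes save().
        BEGIN TRANSACTION;
        UPDATE accounts SET owner_name   = 'Alice'  WHERE account_id = 42;
        UPDATE accounts SET credit_limit = 5000.00  WHERE account_id = 42;
        COMMIT;  -- nothing is visible to other sessions until this point

    A save-as-you-go wrapper would instead issue each UPDATE as its own auto-committed statement, trading atomicity for simplicity.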

    Read the article

  • Database Insider - November 2012 issue

    - by Javier Puerta
    The November issue of the Database Insider newsletter is now available. (Full newsletter here)

    Mark Hurd: Oracle Database Wrap-up from Oracle OpenWorld 2012
    Oracle executives kicked off Oracle OpenWorld 2012, discussing the needs of customers, the brand-new Oracle Exadata Database Machine X3, and the latest Oracle Database innovations. (Read More)

    Webcast: Introduction to Oracle Exadata Database Machine X3
    Oracle's next-generation database machine, Oracle Exadata X3, combines massive memory and low-cost disks to deliver the highest performance at the lowest cost. Available in an eighth-rack configuration, it allows you to start small and grow.

    Webcast: SAP Applications Run Better on Oracle Exadata
    Find out why a growing number of SAP application customers are turning to Oracle Exadata Database Machine for better performance, better productivity, and big savings.

    Read the article

  • Design Advice Needed For Synonyms Database

    - by James J
    I'm planning to put together a database that can be used to query synonyms of words. The database will end up huge, so the idea is to keep things running fast. I've been thinking about how to do this, but my database design skills are not up to scratch these days. My initial idea was to store each word in one table, and then have a second table with a one-to-many relationship where each word can be linked to another word; that table is what gets queried. The application I'm developing allows users to highlight a word and then type in, or select, some synonyms from the database for that word. The application learns from the user input, so if someone highlights "car" and types in "motor", the database is updated to link the relationship if it doesn't already exist. What I don't want is for a user to type in the word "shop" and link it to the word "car". So I'm thinking I will need to add some sort of weight to each relationship. Eventually the synonyms users enter will be used so they can auto-select common synonyms for a certain word. The lower-weight words will not be displayed, so "shop" could never be a synonym of "car" unless it had a very high weight, and chances are nobody is going to do that. Does the above sound right? Can you offer any suggestions or improvements?
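    A sketch of the two-table design with a weight column, using hypothetical names:

        CREATE TABLE words (
            id   INT PRIMARY KEY,
            word VARCHAR(100) NOT NULL UNIQUE
        );

        -- One row per word pair; weight grows each time a user links the
        -- pair, so rarely-confirmed synonyms stay below the display threshold.
        CREATE TABLE synonyms (
            word_id    INT NOT NULL REFERENCES words(id),
            synonym_id INT NOT NULL REFERENCES words(id),
            weight     INT NOT NULL DEFAULT 1,
            PRIMARY KEY (word_id, synonym_id)
        );

        -- "car" highlighted, "motor" typed again: strengthen the link.
        -- (Insert-or-update syntax varies by engine; shown generically.)
        UPDATE synonyms SET weight = weight + 1
        WHERE word_id = 1 AND synonym_id = 2;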

    Read the article

  • Multiple database support in Django

    - by codebreak
    On a forum I read that multiple database support has been added to Django at a lower level, but that the higher-level APIs are not there yet. Can anyone tell me how to achieve multiple database connections in Django? Does anyone have any idea when Django will fully/officially support multiple database connections?

    Read the article

  • C#, Manage concurrency in database access

    - by Goul
    Hi there, a while ago I wrote an application used by multiple users to handle trade creation. I haven't done development for some time now and can't remember how I managed the concurrency between the users, so I would like your advice on the design. The application was as follows:

    - One heavy client per user
    - A single database
    - Access to the database for each user to insert/update/delete trades
    - A grid in the application reflecting the trades table, updated each time someone changes a deal

    My questions:

    1. Do you confirm I shouldn't worry about the database connection in each application? Given that each client uses a singleton, I would expect one connection per client with no issue.
    2. How do I prevent conflicting concurrent access? I guess I should lock when modifying the data, but I don't remember how.
    3. How can the grid be updated automatically whenever the database changes (by another user, for example)?

    Thank you in advance for your help!
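    On question 2, one common approach is optimistic concurrency with a version column rather than long-held locks; a minimal T-SQL sketch with a hypothetical Trades table:

        -- Hypothetical Trades table using a rowversion column for
        -- optimistic concurrency.
        CREATE TABLE Trades (
            TradeId INT IDENTITY PRIMARY KEY,
            Amount  DECIMAL(18,2) NOT NULL,
            RowVer  ROWVERSION           -- bumped automatically on every write
        );

        -- Read the row together with its current version...
        DECLARE @id INT = 1, @oldVer BINARY(8), @newAmount DECIMAL(18,2) = 250.00;
        SELECT @oldVer = RowVer FROM Trades WHERE TradeId = @id;

        -- ...then update only if nobody changed the row in the meantime.
        UPDATE Trades
        SET    Amount = @newAmount
        WHERE  TradeId = @id AND RowVer = @oldVer;

        IF @@ROWCOUNT = 0
            PRINT 'Conflict: another user modified the trade; reload and retry.';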

    Read the article

  • Creating a simple desktop database application

    - by KoolKabin
    Hi guys, I am about to write a small database application that will run on the desktop (offline mode). I am using MS Access 2007 as my database file and trying to write the code in VB.NET. In VB6 I used to have global variables for storing the database connection and executing every query through it. I am trying to upgrade myself from VB6 to VB.NET. Do I also need to read some beginner books?

    Read the article

  • Database design: one huge table or separate tables?

    - by littlegreen
    Currently I am designing a database for use in our company. We are using SQL Server 2008. The database will hold data gathered from several customers. The goal of the database is to acquire aggregate benchmark numbers over several customers. Recently, I have become worried that one table in particular will be getting very big. Each customer has approximately 20,000,000 rows of data, and there will soon be 30 customers in the database (if not more). A lot of queries will be run against this table. I am already noticing performance issues and users being temporarily locked out. My question: will we be able to handle this table in the future, or is it better to split it up into smaller tables per customer?
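    A third option between one huge table and a table per customer is table partitioning, which SQL Server 2008 supports; a minimal sketch, with hypothetical names and customer-ID boundaries:

        -- Partition rows by customer so queries and maintenance touch
        -- only the relevant slice (boundary values are placeholders).
        CREATE PARTITION FUNCTION pfCustomer (INT)
            AS RANGE RIGHT FOR VALUES (10, 20, 30);

        CREATE PARTITION SCHEME psCustomer
            AS PARTITION pfCustomer ALL TO ([PRIMARY]);

        CREATE TABLE CustomerData (
            CustomerId  INT      NOT NULL,
            MeasuredAt  DATETIME NOT NULL,
            Value       FLOAT    NOT NULL
        ) ON psCustomer (CustomerId);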

    Read the article

  • Export from a standalone database to an embedded database.

    - by jdana
    I have a two-part application: there is a central database that is edited, and then at certain times the data is released and distributed as its own application. I would like to use a standalone database for the central database (MySQL, Postgres, Oracle, SQL Server, etc.) and then have a reliable export to an embedded database (probably SQLite) for distribution. What tools/processes are available for such an export, or is it a practice to be avoided? EDIT: A couple of additional pieces of information. The distributed application should be able to run without connecting to another server (e.g. your spellchecker still works even when you don't have internet), and I don't want to install a full DB server for read-only access to the data.

    Read the article

  • Updating an Access 2000 database through code in VB6

    - by Mark
    I have an application currently in distribution that uses an Access 2000 database. I need to update one of the recordsets with additional fields on my customers' computers. My data controls work fine, as I have them set to connect in Access 2000 format, but when I try to open the database in code, I get an unrecognized database format error. What is the best way to replace, or add to, the database on their machines? Any help is greatly appreciated.

    Read the article

  • YouTube table structures

    - by Shyju
    Can anyone tell me how YouTube stores video-related information in its tables? What would the table structure be, what columns would the various tables have, and what are the relations between them? Thanks in advance.
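    YouTube has never published its schema, so anything here is speculation; a purely illustrative sketch of the kind of tables involved (all names hypothetical) might be:

        -- Illustrative guess only; YouTube's real tables are not public.
        CREATE TABLE users (
            user_id  BIGINT PRIMARY KEY,
            username VARCHAR(64) NOT NULL
        );

        CREATE TABLE videos (
            video_id     BIGINT PRIMARY KEY,
            user_id      BIGINT NOT NULL REFERENCES users(user_id),
            title        VARCHAR(255) NOT NULL,
            duration_sec INT,
            uploaded_at  TIMESTAMP
        );

        CREATE TABLE comments (
            comment_id BIGINT PRIMARY KEY,
            video_id   BIGINT NOT NULL REFERENCES videos(video_id),
            user_id    BIGINT NOT NULL REFERENCES users(user_id),
            body       TEXT,
            posted_at  TIMESTAMP
        );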

    Read the article

  • Oracle 11g No.2 - v$database.CURRENT_SCN

    - by Todd Bao
    On 11.2.0.3.0, the following query returns two different SCNs, whereas on 11.2.0.1.0 both rows show the same SCN:

        select current_scn from v$database
        union all select current_scn from v$database;

    The reason is visible in the 11.2.0.3.0 execution plan: the optimizer factors the access to X$KCCDI (one of the fixed tables behind V$DATABASE, and the source of the CURRENT_SCN column) out of the UNION ALL:

        ----------------------------------------------------
        | Id  | Operation            | Name               |
        ----------------------------------------------------
        |   0 | SELECT STATEMENT     |                    |
        |   1 |  MERGE JOIN CARTESIAN|                    |
        |*  2 |   FIXED TABLE FULL   | X$KCCDI            |
        |   3 |   BUFFER SORT        |                    |
        |   4 |    VIEW              | VW_JF_SET$6E0AEE5B |
        |   5 |     UNION-ALL        |                    |
        |   6 |      FIXED TABLE FULL| X$KCCDI2           |
        |   7 |      FIXED TABLE FULL| X$KCCDI2           |
        ----------------------------------------------------

    a. Stacking more UNION ALL branches shows the effect: the X$KCCDI fetch yields one SCN, while the branches driven through X$KCCDI2 (the other fixed table behind V$DATABASE) share the next one:

        SYS@fmw//Scripts> run
          1  select current_scn from v$database
          2  union all select current_scn from v$database
          3  union all select current_scn from v$database
          4* union all select current_scn from v$database

        CURRENT_SCN
        -----------
            5074384
            5074385
            5074385
            5074385

        4 rows selected.

    b. Joining V$DATABASE with another view gives a different SCN per row:

        SYS@fmw//Scripts> run
          1  select current_scn,status from v$database,v$instance
          2  union all
          3* select current_scn,status from v$database,v$instance

        CURRENT_SCN + STATUS
        ----------- + ------------------------
            5075463 + OPEN
            5075464 + OPEN

        2 rows selected.

    c. Even a cartesian self-join returns two different SCNs within a single row, so this is not just a UNION ALL artifact:

        SYS@fmw//Scripts> run
          1* select a.current_scn,b.current_scn from v$database a,v$database b

        CURRENT_SCN + CURRENT_SCN
        ----------- + -----------
            5078328 +     5078329

        1 row selected.

    d. The same behavior reproduces against X$KCCDI directly (X$ tables are only visible to SYS, whereas V$DATABASE can be granted to other users):

        SYS@fmw//Scripts> run
          1  select dicur_scn from x$kccdi
          2* union all select dicur_scn from x$kccdi

        DICUR_SCN
        --------------------------------
        5082183
        5082184

        2 rows selected.

        SYS@fmw//Scripts> run
          1* select a.dicur_scn,b.dicur_scn from x$kccdi a,x$kccdi b

        DICUR_SCN                        + DICUR_SCN
        -------------------------------- + --------------------------------
        5082913                          + 5082914

        1 row selected.

    In short, per Todd Bao: on 11.2.0.3.0 every reference to X$KCCDI fetches a fresh SCN, so multiple occurrences of V$DATABASE.CURRENT_SCN in a single statement can each return a different value; effectively a "next SCN" rather than one consistent snapshot. (All demos above were run on 11.2.0.3.)

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building and studying software systems. He is a top-rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise's bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and, most recently, Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt. As data in your applications grows, it's the database that usually becomes the bottleneck. It's hard to scale a relational DB, so the preferred approach for large-scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools and techniques that allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. Those requirements typically are data analytics, an effective schema (so an information worker can self-service), historical data, better performance (flat data, no joins), etc. This is where the need for a data warehouse or an OLAP system arises. A key point to remember is that a data warehouse is mostly a relational database, built on top of the same concepts: tables, rows, columns, primary keys, foreign keys, and so on. Before we talk about how data warehouses are typically structured, let's understand the key components that create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it: a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in this context could be how many customers, divided by geography, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out-of-the-box features provided by some databases; e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements. b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system and dumps them into the data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them. c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains is how the warehouse will record these changes. This structural change in the data warehouse arena is often covered under something called Slowly Changing Dimension (SCD).
While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This creates multiple records for a given entity in the relational database, so data warehouses prefer having their own primary key, often known as a surrogate key. As I mentioned, a data warehouse is just a relational database, but the industry often attributes a specific schema style to data warehouses. These styles are the Star Schema and the Snowflake Schema. The motivation behind them is to create a flat database structure (as opposed to a normalized one), which is easy to understand and use, easy to query, and easy to slice and dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables hold these numbers and have references (foreign keys) to a set of tables that provide context around those facts. E.g. if you have recorded 10,000 USD in sales, that number goes in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table containing the dates between which that sale was made. These agent and time tables are called dimensions, which provide context to the numbers stored in fact tables. This schema structure, with the fact at the center surrounded by dimensions, is called a star schema. A similar structure in which the dimension tables are normalized is called a snowflake schema. This relational structure of facts and dimensions serves as the input for another analysis structure called a cube. Though physically a cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it's a multidimensional structure where dimensions define the sides of the cube and facts define its content. Facts are often called measures inside a cube. Dimensions often form a hierarchy. E.g. product may be broken into categories, and categories in turn into individual items. Category and item are often referred to as levels, their constituents as members, and the overall structure as a hierarchy. Measures are rolled up according to the dimensional hierarchy; these rolled-up measures are called aggregates. Now this may seem like an overwhelming vocabulary to deal with, but don't worry, it will sink in as you start working with cubes. Let's look at a few other terms we run into while talking about data warehouses. ODS, or Operational Data Store, is a frequently misused term. There will be a few users in your organization who want to report on the most current data and can't afford to miss a single transaction for their report. Then there is another set of users who typically don't care how current the data is: mostly senior-level executives who are interested in trending, mining, forecasting, strategizing, etc. and don't care about one specific transaction. This is where an ODS can come in handy. An ODS can use the same star schema and OLAP cubes we saw earlier; the only difference is that the data inside an ODS is short-lived, i.e. kept for a few months, and the ODS syncs with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS, either daily or weekly, depending on business drivers. Data marts are another frequently discussed topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage, track, etc.
Hence they are often scaled down to something called a data mart that supports a specific segment of business such as sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. The industry is divided when it comes to the use of data marts. Some experts prefer having data marts along with a central data warehouse: the data warehouse acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
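    To make the star schema concrete, here is a minimal SQL sketch of the sales example above (all table and column names are hypothetical):

        -- Dimension tables provide the context...
        CREATE TABLE dim_agent (
            agent_key  INT PRIMARY KEY,   -- surrogate key
            agent_name VARCHAR(100)
        );

        CREATE TABLE dim_time (
            time_key  INT PRIMARY KEY,    -- surrogate key
            sale_date DATE,
            month     INT,
            year      INT
        );

        -- ...and the fact table at the center holds the measures.
        CREATE TABLE fact_sales (
            agent_key    INT NOT NULL REFERENCES dim_agent (agent_key),
            time_key     INT NOT NULL REFERENCES dim_time (time_key),
            sales_amount DECIMAL(18,2)    -- e.g. the 10,000 USD sale
        );

    A snowflake schema would further normalize dim_agent and dim_time into sub-dimension tables.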

    Read the article

  • Fusion Concepts: Fusion Database Schemas

    - by Vik Kumar
    You often read about the FUSION and FUSION_RUNTIME users while dealing with Fusion Applications. There is one more, called FUSION_DYNAMIC. Here are some details on the differences between these three and the purpose of each type of schema.

    FUSION: Can be considered the administrator of Fusion Applications, with the corresponding rights and powers, such as owning tables and objects and providing grants to FUSION_RUNTIME. It is used for patching and has grants to many internal DBMS functions.

    FUSION_RUNTIME: Used to run the applications. Contains no DB objects.

    FUSION_DYNAMIC: Owns the objects that are created dynamically through ADM_DDL, a package that acts as a wrapper around DDL statements. ADM_DDL supports operations like truncating a table, creating an index, etc.

    As the above indicates, FUSION owns the tables and objects, including the FND tables, so using FUSION to run the applications would be insecure: it would be possible to modify security policies and other key information in the base tables (like FND) to break Fusion Applications security via SQL injection, or to write a logon DB trigger and steal credentials, etc. Thus, to keep Fusion Applications secure, FUSION_RUNTIME is granted privileges to execute DML only on APPS tables. Another benefit of having separate users is achieving Separation of Duties (SoD) at the schema level, which is required by auditors.

    Below are the roles and privileges assigned to the FUSION, FUSION_RUNTIME and FUSION_DYNAMIC schemas:

    FUSION has the following privileges:
    - CREATE SESSION
    - All types of DDL on objects owned by FUSION; additionally, some specific privileges on other schemas are also granted to FUSION
    - EXECUTE on various EDN_PUBLISH_EVENT objects

    And the following roles:
    - CTXAPP for managing Oracle Text objects
    - AQ_USER_ROLE and AQ_ADMINISTRATOR_ROLE for managing Advanced Queues (AQ)

    FUSION_RUNTIME has the following privileges:
    - CREATE SESSION
    - CHANGE NOTIFICATION
    - EXECUTE on various EDN_PUBLISH_EVENT objects

    And the following roles:
    - FUSION_APPS_READ_WRITE for performing DML (Select, Insert, Delete) on Fusion Apps tables
    - FUSION_APPS_EXECUTE for executing objects such as procedures, functions, packages, etc.
    - AQ_USER_ROLE and AQ_ADMINISTRATOR_ROLE for managing Advanced Queues (AQ)

    FUSION_DYNAMIC has the following privileges:
    - CREATE SESSION, PROCEDURE, TABLE, SEQUENCE, SYNONYM, VIEW
    - UNLIMITED TABLESPACE
    - ANALYZE ANY
    - CREATE MINING MODEL
    - EXECUTE on specific procedures, functions or packages, and SELECT on specific tables; this depends on the objects identified by product teams that ADM_DDL needs access to in order to perform dynamic DDL statements

    There is one more role, FUSION_APPS_READ_ONLY, which is not attached to any user and has only the SELECT privilege on all Fusion objects. FUSION_RUNTIME does not have any synonyms defined to access objects owned by the FUSION schema; instead, a logon trigger defined in FUSION_RUNTIME sets the current schema to FUSION, eliminating the need for synonyms.

    What does this mean for developers? Fusion Applications developers should use FUSION_RUNTIME for testing and running the Fusion Applications UI and BC, and for connecting from any SQL front end such as SQL*Plus, SQL*Loader, etc.
When testing ADF BC with the AM tester while using FUSION_RUNTIME, you may hit the following error: oracle.jbo.JboException: JBO-29000: Unexpected exception caught: java.sql.SQLException, msg=invalid name pattern: FUSION.FND_TABLE_OF_VARCHAR2_255. The fix is to add the JVM parameter -Doracle.jdbc.createDescriptorUseCurrentSchemaForSchemaName=true to the Run/Debug client property in the Model project properties. More details are discussed in this forum thread.
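    For illustration, a schema-switching logon trigger of the kind described above could look like this minimal sketch (the trigger name and exact body used by Fusion Applications are assumptions here):

        -- Hypothetical sketch; the real Fusion Applications trigger name
        -- and body may differ.
        CREATE OR REPLACE TRIGGER fusion_runtime_logon
        AFTER LOGON ON fusion_runtime.SCHEMA
        BEGIN
            EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = FUSION';
        END;
        /

    With this in place, an unqualified object name in a FUSION_RUNTIME session resolves against FUSION without any synonyms.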

    Read the article

  • How the "migrations" approach makes database continuous integration possible

    - by David Atkinson
    Testing a database upgrade script as part of a continuous integration process will only work if there is an easy way to automate the generation of the upgrade scripts. There are two common approaches to managing upgrade scripts. The first is to maintain a set of scripts as-you-go-along. Many SQL developers I've encountered will store these in a folder prefixed numerically to ensure they are ordered as they are intended to be run. Occasionally there is an accompanying document or a batch file that ensures that the scripts are run in the defined order. Writing these scripts during the course of development requires discipline. It's all too easy to load up the table designer and to make a change directly to the development database, rather than to save off the ALTER statement that is required when the same change is made to production. This discipline can add considerable overhead to the development process. However, come the end of the project, everything is ready for final testing and deployment. The second development paradigm is to not do the above. Changes are made to the development database without considering the incremental update scripts required to effect the changes. At the end of the project, the SQL developer or DBA, is tasked to work out what changes have been made, and to hand-craft the upgrade scripts retrospectively. The end of the project is the wrong time to be doing this, as the pressure is mounting to ship the product. And where data deployment is involved, it is prudent not to feel rushed. Schema comparison tools such as SQL Compare have made this latter technique more bearable. These tools work by analyzing the before and after states of a database schema, and calculating the SQL required to transition the database. Problem solved? Not entirely. Schema comparison tools are huge time savers, but they have their limitations. There are certain changes that can be made to a database that can't be determined purely from observing the static schema states. If a column is split, how do we determine the algorithm required to copy the data into the new columns? If a NOT NULL column is added without a default, how do we populate the new field for existing records in the target? If we rename a table, how do we know we've done a rename, as we could equally have dropped a table and created a new one? All the above are examples of situations where developer intent is required to supplement the script generation engine. SQL Source Control 3 and SQL Compare 10 introduced a new feature, migration scripts, allowing developers to add custom scripts to replace the default script generation behavior. These scripts are committed to source control alongside the schema changes, and are associated with one or more changesets. Before this capability was introduced, any schema change that required additional developer intent would break any attempt at auto-generation of the upgrade script, rendering deployment testing as part of continuous integration useless. SQL Compare will now generate upgrade scripts not only using its diffing engine, but also using the knowledge supplied by developers in the guise of migration scripts. In future posts I will describe the necessary command line syntax to leverage this feature as part of an automated build process such as continuous integration.
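    To make the column-split case concrete, a hand-written migration script might look like the sketch below (table and column names are hypothetical); no comparison engine could infer the copy step from the before and after schema states alone:

        -- Hypothetical migration: split Customers.FullName into
        -- FirstName/LastName, preserving the existing data.
        ALTER TABLE Customers ADD FirstName NVARCHAR(50) NULL;
        ALTER TABLE Customers ADD LastName  NVARCHAR(50) NULL;
        GO

        UPDATE Customers
        SET FirstName = LEFT(FullName, CHARINDEX(' ', FullName + ' ') - 1),
            LastName  = LTRIM(STUFF(FullName, 1, CHARINDEX(' ', FullName + ' '), ''));
        GO

        ALTER TABLE Customers DROP COLUMN FullName;

    Committed alongside the schema change, a script like this lets the script generation engine replay the developer's intent during automated deployment.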

    Read the article

  • SQL SERVER – How to Roll Back SQL Server Database Changes

    - by Pinal Dave
    In a perfect scenario, no unexpected and unplanned changes occur. There are no unpleasant surprises, no inadvertent changes. However, even with all precautions and testing, there is sometimes a need to revert a structure or data change. One method that can be used in this situation is to use an older database backup that has the records or database object structure you want to revert to. For this method, you have to have an adequate full database backup, and a tool that helps with comparison and synchronization is preferred. In this article, we will focus on another method: rolling back the changes. This can be done by using an option in SQL Server Management Studio, T-SQL, or ApexSQL Log. The first two solutions have been described in this article. The disadvantages of these methods are that you have to know exactly when the change you want to revert happened, and that all transactions executed on the database in that time range are rolled back: the ones you want to undo and the ones you don't. How to easily roll back SQL Server database changes using ApexSQL Log? The biggest challenge is to roll back just specific changes, not all changes that happened in a specific time range. While the SQL Server Management Studio option and T-SQL read and roll forward all transactions in the transaction log files, I will show you a solution that finds and scripts only the specific changes that match your criteria. Therefore, you don't need to worry about any other database changes that you don't want to roll back. ApexSQL Log is a SQL Server disaster recovery tool that reads transaction logs and provides a wide range of filters that enable you to easily roll back only specific data changes. First, connect to the online database where you want to roll back the changes. Once you select the database, ApexSQL Log will show its recovery model. Note that changes can be rolled back even for a database in the Simple recovery model, when no database and transaction log backups are available. However, ApexSQL Log achieves best results when the database is in the Full recovery model and you have a chain of subsequent transaction log backups going back to the moment when the change occurred. In this example, we will use only the online transaction log. In the next step, use filters to read only the transactions that happened in a specific time range. To remove noise, it's recommended to use as many filters as possible. Besides filtering by the time of the transaction, ApexSQL Log can filter by operation type and table name, as well as by transaction state (committed, aborted, running, and unknown), the name of the user who committed the change, specific field values, server process IDs, and transaction description. You can select only the tables affected by the changes you want to roll back. However, if you're not certain which tables were affected, you can leave them all selected and, once the results are shown in the main grid, analyze them to find the ones you want to roll back. When you set the filters, you can select how to present the results. ApexSQL Log can automatically create undo or redo scripts, export the transactions into an XML, HTML, CSV, SQL, or SQL Bulk file, and create a batch file that you can use for unattended transaction log reading. In this example, I will open the results in the grid, as I want to analyze them before rolling back the transactions. The results contain information about the transaction, as well as who made it and when.
For UPDATEs, ApexSQL Log shows both the old and new values, so you can easily see what happened. To create an UNDO script that rolls back the changes, select the transactions you want to roll back and click Create undo script in the menu. For the DELETE statement selected in the screenshot above, the undo script is: INSERT INTO [Sales].[PersonCreditCard] ([BusinessEntityID], [CreditCardID], [ModifiedDate]) VALUES (297, 8010, '20050901 00:00:00.000') When it comes to rolling back database changes, ApexSQL Log has a big advantage, as it rolls back only specific transactions while leaving all other transactions that occurred in the same time range intact. That makes ApexSQL Log a good solution for rolling back inadvertent data and schema changes on your SQL Server databases. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: ApexSQL
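    For contrast, the point-in-time methods mentioned at the start roll back everything after a given moment, wanted and unwanted alike. A minimal T-SQL sketch, assuming a hypothetical database MyDB with full and transaction log backups on disk:

        -- Restore the last full backup, then replay the log up to the
        -- moment just before the inadvertent change (names are placeholders).
        RESTORE DATABASE MyDB FROM DISK = N'C:\Backups\MyDB_full.bak'
            WITH NORECOVERY, REPLACE;
        RESTORE LOG MyDB FROM DISK = N'C:\Backups\MyDB_log.trn'
            WITH STOPAT = '2014-05-20 14:30:00', RECOVERY;

    Every transaction after the STOPAT time is lost, which is exactly the limitation the filter-based approach above avoids.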

    Read the article

  • PostgreSQL vs Cassandra vs MongoDB vs Voldemort?

    - by ramonrails
    Which database should we decide on? Any comparisons?

    Existing: PostgreSQL. Issues:
    - Not easily scalable horizontally; needs sharding, etc.
    - Clustering does not solve the data-growth problem

    Looking for any database that is easily horizontally scalable:
    - Cassandra (Twitter uses that?)
    - MongoDB (rapidly gaining popularity)
    - Voldemort
    - Other?

    Why?
    - Data is growing with a snowball effect
    - The existing PostgreSQL locks tables etc. for vacuum tasks periodically
    - Archiving data is currently tedious
    - Human interaction is involved in the existing archive/vacuum process
    - Need a "set it, forget it, just add another server when the data grows" type of solution

    Read the article

  • What file format/database format does Picasa use?

    - by Raymond
    I am trying to figure out what file format the .db file and .pmp files are. I tried using db_dump (Berkeley DB) for the .db files, but it seems that they are not Berkeley DB, or are of an older version. I have no idea what the .PMP files are.

    Directory of C:\Users\me\AppData\Local\Google\Picasa2\db3

    6/09/2010  08:07 PM        303,748  imagedata_uid64.pmp
    1/18/2010  10:34 PM          4,885  imagedata_unification_lhlist.pmp
    6/09/2010  10:55 PM        155,752  imagedata_width.pmp
    6/09/2010  10:55 PM  1,286,346,614  previews_0.db
    6/10/2010  10:06 AM        467,168  previews_index.db

    Any help appreciated.

    Read the article

  • Daemon/software that takes changes from a SQL database and applies them to Unix config files

    - by Dude Man
    I was wondering if there is a Unix daemon available that would be capable of something like this: an admin adds an IP entry to a DB; the daemon finds the change after a wait interval and manipulates ifconfig/config files. I was thinking maybe there is a plugin for cfengine that might be able to do this, but I couldn't find any. I mean, this would be a fairly easy thing to script up in Perl, but why re-invent the wheel if there's already something out there that is better than what my limited programming abilities can produce? Lastly, if it worked on FreeBSD that'd be great.

    Read the article

  • On MySQL 5.1 for Windows, why can't I assign DBA role to the "root" user?

    - by djangofan
    On MySQL 5.1 for Windows, why can't I assign the DBA role to the "root" user? MySQL Workbench allows me to add all the other roles except DBA. Also, when I "alter schema" on any table while logged in as root, I don't see all the tabs that show me the database properties; I only see the first tab, which lets me change the collation. What is wrong with this picture? How do I give root all privileges? I've tried a few variations of GRANT ALL PRIVILEGES etc. from the command line, but nothing works. My root account is unable to alter column names, indexes, or options of any table that I create. I can create tables and delete them, but I can't alter them.
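    For reference, the usual command-line statement for giving an account every privilege is the one below, run while connected as a user that already holds the GRANT OPTION:

        -- Grant every privilege on all schemas to root connecting locally,
        -- then reload the grant tables.
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
        FLUSH PRIVILEGES;

    If Workbench still hides the table-properties tabs after this, the problem may lie in Workbench's role metadata rather than the server-side grants.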

    Read the article

  • DB2 Integrity Checks and Exception Tables

    - by imthefirestartr
    I am planning a migration of a DB2 8.1 database from a horrible IBM encoding to UTF-8 to support more languages, etc., and I am stuck on one issue. A few notes on this migration: We are using db2move to export and load the data, and db2look to get the details of the database (tablespaces, tables, keys, etc.). We found the loading process worked nicely with db2move import; however, the data takes 7 hours to load, and that is unacceptable downtime when we actually complete the conversion on the main database. We are now using db2move load, which is much faster, as it seems to simply throw the data in without integrity checks. Which leads to my current issue: after completing the db2move load process, several tables are in a check-pending state and require integrity checks. Integrity checks are done via the following: set integrity for <table> immediate checked. This works for most tables; however, some tables give an error: DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL3603N Check data processing through the SET INTEGRITY statement has found integrity violation involving a constraint with name "blah.SQL120124110232400". SQLSTATE=23514. The internets tell me that the solution to this issue is to create an exception table based on the actual table and tell the SET INTEGRITY command to send any exceptions to that table (as below): db2 create table blah_EXCEPTION like blah db2 SET INTEGRITY FOR blah IMMEDIATE CHECKED FOR EXCEPTION IN blah USE blah_EXCEPTION NOW, here is the specific issue I am having! The above forces all the rows with issues into the specified exception table. That's just super, but I cannot lose data in this conversion; it's simply unacceptable. The internets and IBM give a vague description of sending the violations to the exception tables and then "dealing with the data" that ends up in them. Unfortunately, I am not clear on what that means, and I was hoping that some wise individual could let me know how I can retrieve the data from these exception tables and place it in the original/proper table rather than leaving it in the exception tables. Let me know if you have any questions. Thanks!
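    In case it helps, the usual pattern for "dealing with the data" is to inspect the exception rows, fix whatever violated the constraint, and move the rows back; a sketch using the blah tables from above:

        -- blah_EXCEPTION was created LIKE blah, so its columns mirror the
        -- base table; rows that failed the integrity check land here.
        SELECT * FROM blah_EXCEPTION;

        -- After fixing the cause (e.g. re-inserting the missing parent
        -- keys the constraint points at), copy the rows back and clean up.
        INSERT INTO blah SELECT * FROM blah_EXCEPTION;
        DELETE FROM blah_EXCEPTION;

    Rows copied back this way go through normal constraint enforcement again, so nothing is lost as long as every exception row is repaired and re-inserted.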

    Read the article

  • MySQL Database Replication and Server Load

    - by Willy
    Hi everyone, I have an online service with around 5000 MySQL databases. I am now interested in building a development area with exactly the same environment in my office, so I am about to set up MySQL replication between my live MySQL server and a development MySQL server. My concern is the load that will be placed on my live MySQL server once replication is started. Do you have any experience with this? Will this process cause extra load on my production server? Thanks, have a nice weekend.
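    For context, pointing the development replica at the live master takes only a few statements on the replica; a sketch with placeholder host, credentials, and binary log coordinates:

        -- Run on the development server. Host, user, password and binlog
        -- coordinates are placeholders for your own values.
        CHANGE MASTER TO
            MASTER_HOST     = 'live-db.example.com',
            MASTER_USER     = 'repl',
            MASTER_PASSWORD = 'secret',
            MASTER_LOG_FILE = 'mysql-bin.000001',
            MASTER_LOG_POS  = 4;
        START SLAVE;

    The master's extra steady-state work is limited to the binlog dump thread streaming events to the replica, which is usually modest compared with the cost of taking the initial snapshot.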

    Read the article
