Search Results

Search found 31356 results on 1255 pages for 'database backups'.


  • Database tables - how many databases?

    - by Thomas
    How many databases are needed for a social website? I have my tech team developing a social site, but all of their tables are in one database. I wanted to create separate table sets for user data, temporary tables and so on, and I was thinking of maybe having one separate database only for critical data, but I am not a tech person and I'm not sure how this works. The site is going to be a local reviews website.

    Read the article

  • 11gR2 11.2.0.3 Database Certified with E-Business Suite

    - by Elke Phelps (Oracle Development)
    The 11gR2 11.2.0.2 Database was certified with E-Business Suite (EBS) 11i and EBS 12 almost one year ago today. I'm pleased to announce that 11.2.0.3, the second patchset for the 11gR2 Database, is now certified. Be sure to review the interoperability notes for R11i and R12 for the most up-to-date requirements for deployment. This certification announcement is important as you plan upgrades to the technology stack for your environment. For additional upgrade direction, please refer to the recently published EBS upgrade recommendations article. Database support implications may also be reviewed in the database patching and support article.

    Oracle E-Business Suite Release 11i
    Prerequisites: 11.5.10.2 + ATG PF.H RUP 6 and higher.
    Certified Platforms: Linux x86 (Oracle Linux 4, 5); Linux x86 (RHEL 4, 5); Linux x86 (SLES 10); Linux x86-64 (Oracle Linux 4, 5) -- database tier only; Linux x86-64 (RHEL 4, 5) -- database tier only; Linux x86-64 (SLES 10) -- database tier only; Oracle Solaris on SPARC (64-bit) (10); Oracle Solaris on x86-64 (64-bit) (10) -- database tier only.
    Pending Platform Certifications: Microsoft Windows Server (32-bit); Microsoft Windows Server (64-bit); HP-UX PA-RISC (64-bit); HP-UX Itanium; IBM: Linux on System z; IBM AIX on Power Systems.

    Oracle E-Business Suite Release 12
    Prerequisites: Oracle E-Business Suite Release 12.0.4 or later, or Oracle E-Business Suite Release 12.1.1 or later.
    Certified Platforms: Linux x86 (Oracle Linux 4, 5); Linux x86 (RHEL 4, 5); Linux x86 (SLES 10); Linux x86-64 (Oracle Linux 4, 5); Linux x86-64 (RHEL 4, 5); Linux x86-64 (SLES 10); Oracle Solaris on SPARC (64-bit) (10); Oracle Solaris on x86-64 (64-bit) (10) -- database tier only.
    Pending Platform Certifications: Microsoft Windows Server (32-bit); Microsoft Windows Server (64-bit); HP-UX PA-RISC (64-bit); IBM: Linux on System z; IBM AIX on Power Systems; HP-UX Itanium.

    Database Feature and Option Certifications
    The following 11gR2 database options and features are supported for use: Advanced Compression; Active Data Guard; Advanced Security Option (ASO) / Advanced Networking Option (ANO); Database Vault; Database Partitioning; Data Guard Redo Apply with Physical Standby Databases; Native PL/SQL compilation; Oracle Label Security (OLS); Real Application Clusters (RAC); Real Application Testing; SecureFiles; Virtual Private Database (VPD).
    Certification of the following database options and features is still underway: Transparent Data Encryption (TDE) Column Encryption, 11gR2 version 11.2.0.3; Transparent Data Encryption (TDE) Tablespace Encryption, 11gR2 version 11.2.0.3.
    About the pending certifications: Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates, but you're welcome to monitor or subscribe to this blog for updates, which I'll post as soon as they're available.

    EBS 11i References: Interoperability Notes - Oracle E-Business Suite Release 11i with Oracle Database 11g Release 2 (11.2.0) (Note 881505.1); Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i (Note 823586.1); Encrypting Oracle E-Business Suite Release 11i Network Traffic using Advanced Security Option and Advanced Networking Option (Note 391248.1); Using Transparent Data Encryption with Oracle E-Business Suite Release 11i (Note 403294.1); Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 11gR2 (Note 1091086.1); Using Oracle E-Business Suite with a Split Configuration Database Tier on Oracle 11gR2 Version 11.2.0.1.0 (Note 946413.1); Export/Import Process for Oracle E-Business Suite Release 11i Database Instances Using Oracle Database 11g Release 1 or 2 (Note 557738.1); Database Initialization Parameters for Oracle Applications Release 11i (Note 216205.1).
    EBS 12 References: Interoperability Notes - Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0) (Note 1058763.1); Database Initialization Parameters for Oracle Applications Release 12 (Note 396009.1); Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12 (Note 823587.1); Using Transparent Data Encryption with Oracle E-Business Suite Release 12 (Note 732764.1); Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11gR2 (Note 1091083.1); Export/Import Process for Oracle E-Business Suite Release 12 Database Instances Using Oracle Database 11g Release 1 or 11g Release 2 (Note 741818.1); Enabling SSL in Oracle Applications Release 12 (Note 376700.1).
    Related Articles: 11gR2 Database Certified with E-Business Suite 11i; 11gR2 Database Certified with E-Business Suite 12; 11gR2 11.2.0.2 Database Certified with E-Business Suite 12; Can E-Business Users Apply Database Patch Set Updates?; On Apps Tier Patching and Support: A Primer for E-Business Suite Users; On Database Patching and Support: A Primer for E-Business Suite Users; Quarterly E-Business Suite Upgrade Recommendations, October 2011 Edition.
    The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

    Read the article

  • android database leak found IllegalStateException

    - by saravanan
    When I read from the database, it shows the error below. Please help.

    04-20 16:53:39.010: ERROR/Database(419): Leak found
    04-20 16:53:39.010: ERROR/Database(419): java.lang.IllegalStateException: mPrograms size 1
        at android.database.sqlite.SQLiteDatabase.finalize(SQLiteDatabase.java:1668)
        at dalvik.system.NativeStart.run(Native Method)
    Caused by: java.lang.IllegalStateException: /data/data/com.example.search/databases/rlite.db SQLiteDatabase created and never closed
        at android.database.sqlite.SQLiteDatabase.<init>(SQLiteDatabase.java:1694)
        at android.database.sqlite.SQLiteDatabase.openDatabase(SQLiteDatabase.java:738)
        at android.database.sqlite.SQLiteDatabase.openOrCreateDatabase(SQLiteDatabase.java:760)
        at android.database.sqlite.SQLiteDatabase.openOrCreateDatabase(SQLiteDatabase.java:753)
        at android.app.ApplicationContext.openOrCreateDatabase(ApplicationContext.java:473)
        at android.content.ContextWrapper.openOrCreateDatabase(ContextWrapper.java:193)
        at android.database.sqlite.SQLiteOpenHelper.getWritableDatabase(SQLiteOpenHelper.java:98)
        at com.example.search.Database.<init>(Database.java:33)
        at com.example.search.JobDetails.applyJob(JobDetails.java:120)
        at com.example.search.JobDetails.jobdetailsAction(JobDetails.java:98)
        at java.lang.reflect.Method.invokeNative(Native Method)
        at java.lang.reflect.Method.invoke(Method.java:521)
        at android.view.View$1.onClick(View.java:2026)
        at android.view.View.performClick(View.java:2364)
        at android.view.View.onTouchEvent(View.java:4179)
        at android.widget.TextView.onTouchEvent(TextView.java:6540)
        at android.view.View.dispatchTouchEvent(View.java:3709)
        at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
        at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
        at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
        at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
        at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
        at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchTouchEvent(PhoneWindow.java:1659)
        at com.android.internal.policy.impl.PhoneWindow.superDispatchTouchEvent(PhoneWindow.java:1107)
        at android.app.Activity.dispatchTouchEvent(Activity.java:2061)
        at com.android.internal.policy.impl.PhoneWindow$DecorView.dispatchTouchEvent(PhoneWindow.java:1643)
        at android.view.ViewRoot.handleMessage(ViewRoot.java:1691)
        at android.os.Handler.dispatchMessage(Handler.java:99)
        at android.os.Looper.loop(Looper.java:123)
        at android.app.ActivityThread.main(ActivityThread.java:4363)
        at java.lang.reflect.Method.invokeNative(Native Method)
        at java.lang.reflect.Method.invoke(Method.java:521)
        at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
        at dalvik.system.NativeStart.main(Native Method)

    Read the article

  • Cache the result of a MySQLdb database query in memory

    - by ensnare
    Our application fetches the correct database server from a pool of database servers. So each query is really 2 queries, and they look like this: Fetch the correct DB server Execute the query We do this so we can take DB servers online and offline as necessary, as well as for load-balancing. But the first query seems like it could be cached to memory, so it only actually queries the database every 5 or 10 minutes or so. What's the best way to do this? Thanks.
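    One simple approach is to memoize the server lookup with a time-to-live, so the "which server?" query runs only every few minutes. The sketch below is illustrative, not from the original application: get_server_from_pool() is a hypothetical placeholder for the pool lookup, and the connection parameters are invented.

      import time
      import MySQLdb  # assumes the MySQLdb driver mentioned in the question

      _cache = {"server": None, "expires": 0.0}
      TTL_SECONDS = 300  # re-check the pool every 5 minutes

      def get_server_from_pool():
          # Hypothetical placeholder: in a real app this would query the pool/config
          # database for the correct server for the current user or shard.
          return {"host": "db1.example.com", "user": "app", "passwd": "secret", "db": "appdb"}

      def get_db_server():
          """Return the cached server record, refreshing it only after the TTL lapses."""
          now = time.time()
          if _cache["server"] is None or now >= _cache["expires"]:
              _cache["server"] = get_server_from_pool()   # query #1, now run rarely
              _cache["expires"] = now + TTL_SECONDS
          return _cache["server"]

      def run_query(sql, params=None):
          server = get_db_server()
          conn = MySQLdb.connect(**server)                # query #2 uses the cached lookup
          try:
              cur = conn.cursor()
              cur.execute(sql, params or ())
              return cur.fetchall()
          finally:
              conn.close()

    For a multi-process deployment, the same record could live in a shared cache such as memcached so every worker sees server changes at roughly the same time.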

    Read the article

  • Database structure - is MySQL the right choice?

    - by Industrial
    Hi everyone, We are currently planning the database structure of a quite complex e-commerce web app that has flexibility as its main cornerstone. Our app features a large amount of data (products), and we have run into a slight headache trying to keep performance high without compromising normalization rules in the database, or leaving our highly beloved flexibility concept behind when integrating product options (also widely known as product attributes or parameters). Based on various references and sources available, we have made up lists of pros and cons of all major and well-known database patterns to solve this. After comparing these, we have come up with two final alternatives:
    EAV (Entity-attribute-value model): Pros: the database is used for all sorting. Cons: all related queries will include a number of joins between multiple tables in order to complete the collection of data.
    SLOB (Serialized LOB, also known as Facade?): Pros: very flexible; keeps the number of necessary joins low compared to an EAV design pattern; easy to update/add/remove data for each product. Cons: all sorting will be done by the application instead of the database; will use a lot of resources (memory?) when big datasets are processed by a large number of users.
    Our main questions: Which pattern/structure would you use, or maybe even a different solution? Are there better databases besides MySQL available nowadays to accomplish what we want? Thanks a lot! Reference: http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters
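    To make the trade-off concrete, here is a small sketch of the two patterns side by side. It uses SQLite and invented table and attribute names purely for illustration; it is not a recommendation for either design.

      import json
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          -- EAV: one row per product attribute; filtering and sorting stay in SQL
          CREATE TABLE product   (id INTEGER PRIMARY KEY, name TEXT);
          CREATE TABLE attribute (id INTEGER PRIMARY KEY, name TEXT);
          CREATE TABLE product_attribute (
              product_id   INTEGER REFERENCES product(id),
              attribute_id INTEGER REFERENCES attribute(id),
              value        TEXT
          );
          -- Serialized LOB: all options in one blob; flexible, but the application sorts
          CREATE TABLE product_slob (id INTEGER PRIMARY KEY, name TEXT, options TEXT);
      """)

      # EAV: the database does the work, at the cost of one join per attribute involved.
      eav_rows = conn.execute("""
          SELECT p.name
          FROM product p
          JOIN product_attribute pa ON pa.product_id = p.id
          JOIN attribute a          ON a.id = pa.attribute_id
          WHERE a.name = 'colour' AND pa.value = 'red'
          ORDER BY p.name
      """).fetchall()

      # SLOB: fetch everything, then filter and sort in application code.
      slob_rows = conn.execute("SELECT name, options FROM product_slob").fetchall()
      red_products = sorted(name for name, options in slob_rows
                            if json.loads(options).get("colour") == "red")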

    Read the article

  • Small standalone SQL database similar to Access in the old days (i.e. a file database)

    - by Ian
    Hi, I am looking for an easy-to-use and easy-to-deploy SQL-type database I can ship with a desktop application. This will be a small application users can download from my website. In the VB6 days, Access was the common database for small desktop apps; what are my options these days? Looking at SQL CE, it seems to have quite a few limitations, such as COUNT(DISTINCT), etc. SQL Express needs to be installed and running as a service (could I include the SQL Express deployment in my own deployment so the user doesn't even know it's been installed? I assume size would then be an issue). SQL 2005/2008 is not an option due to size and licensing restrictions. I would like to use C#, WPF and Entity Framework. What would seem to be the best options based on your knowledge and experience? Thanks
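    SQLite is one common modern stand-in for the old Access/Jet file databases: the whole database is a single file shipped with the application, nothing is installed as a service, and COUNT(DISTINCT ...) works. The snippet below illustrates the file-database idea using Python's built-in sqlite3 module; it is not the C#/Entity Framework wiring the poster asked about (for that, a provider such as System.Data.SQLite would sit in the same place).

      import sqlite3

      # The whole database lives in one file shipped alongside the application.
      conn = sqlite3.connect("app_data.db")
      conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, customer TEXT)")
      conn.executemany("INSERT INTO orders (customer) VALUES (?)",
                       [("alice",), ("alice",), ("bob",)])
      conn.commit()

      # COUNT(DISTINCT ...) is available, one of the limitations noted for SQL CE.
      (distinct_customers,) = conn.execute(
          "SELECT COUNT(DISTINCT customer) FROM orders").fetchone()
      print(distinct_customers)  # -> 2
      conn.close()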

    Read the article

  • How to Design a SaaS Database

    - by Josh Curren
    I have a web app that I built for a trucking company that I would like to offer as SaaS. What is the best way to design the database? Should I create a new database for each company? Should I use one database with tables that have a prefix of the company name? Should I use one database with one of each table and just add a company id field to the tables? Or is there some other way to do it?
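    As a point of reference, the "one database, company_id on every table" option (shared schema) usually looks like the sketch below. The table and column names are invented, and SQLite stands in for whatever server the app actually uses.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE company (id INTEGER PRIMARY KEY, name TEXT);
          CREATE TABLE load (
              id         INTEGER PRIMARY KEY,
              company_id INTEGER NOT NULL REFERENCES company(id),
              origin     TEXT,
              destination TEXT
          );
          CREATE INDEX idx_load_company ON load(company_id);  -- every query filters on it
      """)

      def loads_for_company(company_id):
          # Every data-access call scopes by the tenant. Forgetting this filter is the
          # main risk of the shared-schema approach, so centralise it in one place.
          return conn.execute(
              "SELECT id, origin, destination FROM load WHERE company_id = ?",
              (company_id,)).fetchall()

    The other common choice, one database per company, gives stronger isolation and simpler per-customer backups and restores, at the cost of running schema migrations against every tenant; prefix-per-company tables tend to combine the drawbacks of both, which is why they are rarely recommended.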

    Read the article

  • SQLAuthority News – History of the Database – 5 Years of Blogging at SQLAuthority

    - by pinaldave
    Don’t miss the contest: participate in the 5th Anniversary Contest. Today is this blog’s birthday, and I want to do a fun, informative blog post. Five years ago this day I started this blog. The intention – my personal web blog. I wrote this blog for me, and still today whatever I learn I share here. I don’t want to wander too far off topic, though, so I will write about two of my favorite things – history and databases. And what better way to cover these two topics than to talk about the history of databases.

    If you want to be technical, databases as we know them today only date back to the late 1960s and early 1970s, when computers began to keep records and store memories. But the idea of memory storage didn’t just appear 40 years ago – there was a history behind wanting to keep these records. In fact, the written word originated as a way to keep records – ancient man didn’t decide they suddenly wanted to read novels; they needed a way to keep track of the harvest, of their flocks, and of the tributes paid to the local lord. And that is how writing and the database began. You could consider the cave paintings from 17,000 years ago at Lascaux, France, or the clay tokens from the ancient Sumerians in 8,000 BC to be the first instances of record keeping – and thus databases.

    If you prefer, you can consider the advent of written language to be the first database. Many historians believe the first written language appeared in the 37th century BC, with Egyptian hieroglyphics. The ancient Sumerians, not to be outdone, also created their own written language within a few hundred years.

    Databases could be more closely described as collections of information, in which case the Sumerians win the prize for the first archive. A collection of 20,000 stone tablets was unearthed in 1964 near the modern-day city of Tell Mardikh, in Syria. This ancient database is from 2,500 BC, and appears to be a sort of law library where apprentice scribes copied important documents. Further archaeological digs hope to uncover the palace library, and thus an even larger database.

    Of course, the most famous ancient database would have to be the Royal Library of Alexandria, the great collection of records and wisdom in ancient Egypt. It was created by Ptolemy I, and existed from 300 BC through 30 AD, when Julius Caesar effectively erased the hard drives by accidentally setting fire to it. As any programmer knows who has forgotten to hit “save” or has experienced a sudden power outage, thousands of hours of work were lost in a single instant.

    Databases existed in very similar conditions up until recently. Cuneiform tablets gave way to papyrus, which led to vellum, and eventually modern paper and the printing press. Someday the databases we rely on so much today will become another chapter in the history of record keeping. Who knows what the databases of tomorrow will look like!

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Should we have a database-independent, SQL-like query language in Django?

    - by Yugal Jindle
    Note: I know we already have the Django ORM, which keeps things database independent and converts to database-specific SQL queries. Once things start getting complicated, it is preferred to write raw SQL queries for better efficiency. When you write raw SQL queries, your code gets tied to the database you are using. I also understand it's important to use the full power of your database, which cannot be achieved with the Django ORM alone. My question: until I use any database-specific feature, why should I be tied to one database? For instance, we have a query with multiple joins and we decided to write a raw SQL query. Now that makes my website Postgres-specific, even though I have not used any Postgres-specific feature. I feel there should be some generic SQL language which can translate to any database's SQL; even Django's ORM could be built on top of it. That way, if you go outside the ORM but are not database specific, you can still remain database independent. I asked the same question to Jacob Kaplan-Moss (in person): he advised me to stay with the database that I like and use its full power, to which I agree. But my point was not that we should be database independent. My point is we should be database independent until we use a database-specific feature. Please explain: why should there be a fake SQL layer over the actual SQL?
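    A layer like the one described already exists outside Django: SQLAlchemy Core builds queries as Python expressions and renders them in each backend's dialect, so a multi-join query stays portable until you deliberately reach for a vendor feature. A minimal sketch, assuming SQLAlchemy 1.4+ and invented table names:

      from sqlalchemy import (Column, Integer, MetaData, String, Table,
                              create_engine, select)

      metadata = MetaData()
      author = Table("author", metadata,
                     Column("id", Integer, primary_key=True),
                     Column("name", String(100)))
      book = Table("book", metadata,
                   Column("id", Integer, primary_key=True),
                   Column("author_id", Integer),
                   Column("title", String(200)))

      # The join is written once, against no particular database...
      query = (select(book.c.title, author.c.name)
               .select_from(book.join(author, book.c.author_id == author.c.id))
               .where(author.c.name == "Tolstoy"))

      # ...and rendered in whichever dialect the engine uses (PostgreSQL, MySQL, SQLite, ...).
      engine = create_engine("sqlite:///:memory:")
      metadata.create_all(engine)
      with engine.connect() as conn:
          rows = conn.execute(query).fetchall()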

    Read the article

  • File backup utility with incremental backups that keeps the backup device clean

    - by Wojtek
    I've tested a few backup utilities and still haven't found one that satisfies me. Almost every one of them has two options:
    - full backup - not an option to use frequently
    - incremental backup - seems right, but there's one thing about it: an incremental backup builds on a base of a full backup, backing up only those files that were created or changed. The problem is that after some time you've got a lot of unwanted files from the old backups bloating your backup device. Also, if you accidentally delete your full (first) backup file, the differential backups become useless (you wouldn't be able to restore from them).
    The thing I'm looking for is a program that would back up files simply by copying them. It would check the backup device for each file:
    - if the device already contains the file (unchanged), it proceeds to the next file (the current version is backed up)
    - if not, it copies the file to the backup device
    - if the device contains a file that is no longer on our disk, the program deletes it from the backup device
    Is there any such utility that works this way? If not, do you have any hints on how to back up fairly big amounts of data (around 20 GB) quite frequently with incremental backups without being exposed to that backup-size bloat?
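    What is described here is a mirror copy rather than an incremental chain; rsync --delete and robocopy /MIR behave this way. A bare-bones Python sketch of the same logic (paths and the size-plus-mtime comparison are illustrative choices, not a finished tool):

      import filecmp
      import shutil
      from pathlib import Path

      def mirror(source: Path, target: Path) -> None:
          """Copy new/changed files to target; delete files that no longer exist in source."""
          target.mkdir(parents=True, exist_ok=True)

          # Copy anything that is missing or has changed (shallow compare: size + mtime).
          for src in source.rglob("*"):
              dst = target / src.relative_to(source)
              if src.is_dir():
                  dst.mkdir(exist_ok=True)
              elif not dst.exists() or not filecmp.cmp(src, dst, shallow=True):
                  shutil.copy2(src, dst)

          # Remove anything on the backup device that was deleted from the source.
          # Reverse sort visits children before their parent directories.
          for dst in sorted(target.rglob("*"), reverse=True):
              src = source / dst.relative_to(target)
              if not src.exists():
                  if dst.is_file():
                      dst.unlink()
                  else:
                      dst.rmdir()

      # mirror(Path("C:/data"), Path("E:/backup/data"))

    The trade-off versus true incrementals is that a mirror keeps only the latest state, so deleting a file by mistake and then running the backup also removes it from the backup device.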

    Read the article

  • SQL SERVER – World Shapefile Download and Upload to Database – Spatial Database

    During my recent training, I was asked by a student if I know a place where he can download spatial files for all the countries around the world, as well as whether there is a way to upload shapefiles to a database. Here is a quick tutorial for it. VDS Technologies has all the spatial [...]

    Read the article

  • 11gR2 DB 11.2.0.1 Certified with E-Business Suite on Solaris 10 (x86-64)

    - by Steven Chan
    Oracle Database 11g Release 2 version 11.2.0.1 is now certified with Oracle E-Business Suite 11i (11.5.10.2) and Release 12 (12.0.4 or higher, 12.1.1 or higher) on Oracle Solaris on x86-64 (64-bit) running Solaris 10. This announcement includes:
    - Oracle Database 11gR2 version 11.2.0.1
    - Oracle Database 11gR2 version 11.2.0.1 Real Application Clusters (RAC)
    - Transparent Data Encryption (TDE) Column Encryption with EBS 11i and R12
    - Advanced Security Option (ASO) / Advanced Networking Option (ANO)
    - Export/Import Process for E-Business Suite 11i and R12 Database Instances
    - Transparent Data Encryption (TDE) Tablespace Encryption

    Read the article

  • EPM 11.1.2 - R&A database connections disappear from the "Database Connection Manager"

    - by Powder
    When accessing the database connection panel through Reporting and Analysis, all previously entered database connections do not appear. This is due to a bug in the Windows SMB2 protocol, and to work around it you have to disable the protocol. On Windows 2008 the protocol is enabled automatically. This needs to be done on both the servers and the clients. Note that “server” is the server which hosts the RAF repository service and the RM1 folder, and “client” is the server which hosts the replicated Repository service that accesses repository files via the network, i.e. \\<server_host>\RM1
    In order to disable SMB 2.0 on the server side, follow these steps:
    1. Run "regedit" on the Windows Server 2008 based computer.
    2. Expand and locate the subtree HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters
    3. Add a new REG_DWORD key with the name "Smb2" (without quotation marks). Value name: Smb2; value type: REG_DWORD; 0 = disabled, 1 = enabled.
    4. Set the value to 0 to disable SMB 2.0, or set it to 1 to re-enable SMB 2.0.
    5. Reboot the server.
    To disable SMB 2.0 for Windows Vista or Windows Server 2008 systems that are the “client” systems, run the following commands:
    sc config lanmanworkstation depend= bowser/mrxsmb10/nsi
    sc config mrxsmb20 start= disabled
    Note there's an extra " " (space) after the "=" sign.
    To re-enable SMB 2.0 for Windows Vista or Windows Server 2008 systems that are the “client” systems, run the following commands:
    sc config lanmanworkstation depend= bowser/mrxsmb10/mrxsmb20/nsi
    sc config mrxsmb20 start= auto
    Again, note there's an extra " " (space) after the "=" sign.

    Read the article

  • Database design and performance impact

    - by Craige
    I have a database design issue that I'm not quite sure how to approach, nor whether the benefits outweigh the costs. I'm hoping some P.SE members can give some feedback on my suggested design, as well as any similar experiences they may have come across.
    As it goes, I am building an application that has large reporting demands. Speed is an important issue, as there will be peak usages throughout the year. This application/database has a multiple-level, many-to-many relationship, e.g. objects a, b, c and d, where:
    object b has a relationship to object a
    object c has relationships to objects b, a
    object d has relationships to objects c, b, a
    Theoretically, this could go on for unlimited levels, though logic dictates it could only go so far. My idea here, to speed up reporting, would be to create a syndicate table that acts as a global many-to-many join table. In this table (with the given example), one might see:
    +----------+-----------+---------+
    | child_id | parent_id | type_id |
    +----------+-----------+---------+
    | b        | a         | 1       |
    | c        | b         | 2       |
    | c        | a         | 3       |
    | d        | c         | 4       |
    | d        | b         | 5       |
    | d        | a         | 6       |
    +----------+-----------+---------+
    where a, b, c and d would translate to their respective IDs in their respective tables. So, for ease of reporting all of a which exist on object d, one could query
    SELECT * FROM `syndicates` ... JOINS TO child and parent tables ... WHERE parent_id=a AND type_id=6;
    rather than having a query with a join to each level up the chain.
    The problem: this table grows exponentially, and in a given year could easily grow past 20,000 records for one client. Given multiple clients over multiple years, this table will VERY quickly explode to millions of records and beyond. Now, the database will, in time, be partitioned across multiple servers, but I would like (as most would) to keep the number of servers as low as possible while still offering flexibility. Also, writes and updates would take exponentially longer (though possibly not noticeably to the end user), as there would be multiple inserts/updates/scans on this table to keep it in sync.
    Am I going in the right direction here, or am I way off track? What would you do in a similar situation? This solution seems overly complex, but it allows the greatest flexibility and the fastest read operations.
    Sidenote 1 - This structure allows me to add new levels to the tree easily.
    Sidenote 2 - The database querying for this application is done through an ORM framework.
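    The structure described is essentially a transitive-closure (ancestor) table. The usual way to keep it in sync is to derive a new object's rows from its direct parent's rows inside the same transaction, rather than rescanning the chain. A hedged SQLite sketch of that maintenance step follows; it replaces the example's type_id with a depth column for simplicity, and the table name matches the example above.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      # depth stands in for the example's type_id: 1 = direct parent, 2 = grandparent, ...
      conn.execute("CREATE TABLE syndicates (child_id TEXT, parent_id TEXT, depth INTEGER)")

      def add_object(conn, new_id, direct_parent_id=None):
          """Insert closure rows for new_id: its direct parent plus all of that parent's ancestors."""
          if direct_parent_id is None:
              return  # a root object (like 'a') has no ancestor rows
          with conn:  # one transaction keeps the closure table consistent with the data tables
              conn.execute(
                  "INSERT INTO syndicates (child_id, parent_id, depth) VALUES (?, ?, 1)",
                  (new_id, direct_parent_id))
              conn.execute(
                  """INSERT INTO syndicates (child_id, parent_id, depth)
                     SELECT ?, parent_id, depth + 1
                     FROM syndicates WHERE child_id = ?""",
                  (new_id, direct_parent_id))

      for obj, parent in [("a", None), ("b", "a"), ("c", "b"), ("d", "c")]:
          add_object(conn, obj, parent)

      # "All of a which exist on object d" becomes a single lookup, no join per level:
      print(conn.execute(
          "SELECT parent_id, depth FROM syndicates WHERE child_id = 'd' ORDER BY depth"
      ).fetchall())
      # -> [('c', 1), ('b', 2), ('a', 3)]

    Each insert writes at most depth-of-tree rows, so writes grow linearly with depth rather than exponentially; the table's total size is what grows quickly, which is where partitioning or archiving old clients comes in.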

    Read the article

  • SQL Server transaction log backups

    - by krimerd
    Hi there, I have a question regarding transaction log backups in SQL Server 2008. I currently take full backups once a week (Sunday) and transaction log backups daily. I put the full backup in folder1 on Sunday, and then on Monday I also put the first transaction log backup in the same folder. On Tuesday, before I take the second transaction log backup, I move the first one from folder1 into folder2, then take the second transaction log backup and put it in folder1. The same thing happens on Wednesday, Thursday and so on. Basically, folder1 always holds the latest full backup and the latest transaction log backup, while the older transaction log backups are in folder2. My question is: when SQL Server is about to take, let's say, the fourth (Thursday) transaction log backup, does it look for the previous transaction log backups (first, second and third) so that the new backup only includes the transactions since the last backup, or does it have some other way of knowing whether there are other transaction log backups? Basically, I am asking this because all my transaction log backups seem to be about the same size, and I thought their size would depend on the amount of transactions since the last transaction log backup. Can anyone please explain if my assumptions are right? Thanks...
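    For context: SQL Server tracks the log chain by log sequence numbers (LSNs) recorded in the database itself and in msdb's backup history, not by inspecting the backup files or the folders they sit in, so moving older .trn files does not change what the next log backup contains. One way to inspect the chain is to query msdb, sketched here with the pyodbc driver; the server and database names are invented.

      import pyodbc

      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;Trusted_Connection=yes")

      # In an unbroken chain, each log backup's first_lsn matches the previous backup's last_lsn.
      rows = conn.execute("""
          SELECT backup_finish_date, type, first_lsn, last_lsn
          FROM msdb.dbo.backupset
          WHERE database_name = ? AND type IN ('D', 'L')   -- D = full, L = log
          ORDER BY backup_finish_date
      """, "MyDatabase").fetchall()

      for finish, btype, first_lsn, last_lsn in rows:
          print(finish, btype, first_lsn, last_lsn)

    Similar-sized .trn files therefore usually just mean a similar volume of logged activity between backups (or a lot of fixed per-file overhead on a quiet database), not that each backup restarts from the full backup.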

    Read the article

  • List of resources for database continuous integration

    - by David Atkinson
    Because there is so little information on database continuous integration out in the wild, I've taken it upon myself to aggregate as much as possible and post the links to this blog. Because it's my area of expertise, this will focus on SQL Server and Red Gate tooling, although I am keen to include any quality articles that discuss the topic in general terms. Please let me know if you find a resource that I haven't listed!
    General database Continuous Integration
    · What is Database Continuous Integration? (David Atkinson)
    · Continuous Integration for SQL Server Databases (Troy Hunt)
    · Installing NAnt to drive database continuous integration (David Atkinson)
    · Continuous Integration Tip #3 - Version your Databases as part of your automated build (Doug Rathbone)
    · How the "migrations" approach makes database continuous integration possible (David Atkinson)
    · Continuous Integration for the Database (Keith Bloom)
    Setting up Continuous Integration with Red Gate tools
    · Continuous integration for databases using Red Gate tools - A technical overview (White Paper, Roger Hart and David Atkinson)
    · Continuous integration for databases using Red Gate SQL tools (Product pages)
    · Database continuous integration step by step (David Atkinson)
    · Database Continuous Integration with Red Gate Tools (video, David Atkinson)
    · Database schema synchronisation with RedGate (Vincent Brouillet)
    · Database continuous integration and deployment with Red Gate tools (David Duffett)
    · Automated database releases with TeamCity and Red Gate (Troy Hunt)
    · How to build a database from source control (David Atkinson)
    · Continuous Integration Automated Database Update Process (Lance Lyons)
    Other
    · Evolutionary Database Design (Martin Fowler)
    · Recipes for Continuous Database Integration: Evolutionary Database Development (book, Pramod J Sadalage)
    · Recipes for Continuous Database Integration (book, Pramod Sadalage)
    · The Red Gate Guide to SQL Server Team-based Development (book, Phil Factor, Grant Fritchey, Alex Kuznetsov, Mladen Prajdic)
    · Using SQL Test Database Unit Testing with TeamCity Continuous Integration (Dave Green)
    · Continuous Database Integration (covers MySQL, Pearson Education)
    Technorati Tags: SQL Server, Continuous Integration

    Read the article

  • Oracle Database 11g Implementation Specialist - 14 to 16 March 2011

    - by Claudia Costa
    OPN Bootcamp - Oracle Software Specialization Course. Dear Partner, Oracle's new partner programme is built around the Specialization of its partners. In the last fiscal year many partners already began their specializations in the areas they are dedicated to and that are priorities for their business. To support the effort and dedication of the many partners working towards certifying their staff, the local alliances and channel team has launched a series of initiatives. Among them is this OPN Bootcamp, created together with Oracle University and dedicated to training and preparation for the Implementation exams that are mandatory for the Oracle Database 11g specialization. This training course aims to prepare partners for the Implementation exam to be taken on 29 March, during the OPN Satellite Event in Lisbon (further details about this event will be communicated shortly). Attending this preparation course in the days before the OPN Satellite event is essential so that your technical staff can sit the exam on 29 March fully prepared and with the best chance of a positive result. That way, on 29 March they can obtain the much-desired certification, with exam costs 100% covered by Oracle. We count on your presence!
    Content: Oracle Database 11g: 2 Day DBA Release 2 + preparation for exam 1Z0-154 Oracle Database 11g: Essentials
    Audience: Database Administrators, Technical Administrators, Technical Consultants, Support Engineers
    Prerequisites: knowledge of the Linux operating system
    Duration: 3 days + exam (1 day). Schedule: 09:00 - 18:00. Dates: 14 to 16 March
    Location: Centro de Formação Oracle, Pessoas e Processos, Rua do Conde Redondo, 145 - 1º - Lisboa. Access: Marquês de Pombal metro station
    Participation cost: EUR 140 per participant per day = EUR 420 per participant (3 days)* - this price includes the Implementation exam. *Final price for the partner; it already includes funding from the Alliances and Channel team
    Exam date and location: 29 March - Oracle University premises
    Free registration. Limited places. Reserve your place now: Email
    For more information about registration: Vítor Pereira, phone 21 778 38 39, mobile 933777099, fax 21 778 38 40. For other information, please contact Claudia Costa / 21 423 50 27

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I’ll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install, and play along. But first, let’s describe some disaster recovery scenarios where it’s useful. About SQL Server disaster recovery Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one’s immune. What you can do is take all actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it’s necessary to have a list of steps that will be executed when a disaster occurs and to test them before a disaster. This way, you’ll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don’t meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require a new recovery plan testing, as these changes most probably induce a recovery plan changing and tweaking. What is a good SQL Server disaster recovery plan? A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential is to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple, but efficient backup strategy would be a full database backup every night, a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and differential database backup every night until next Friday when you will create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans. Why are transaction logs important? Transaction log backups contain transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll back or forward the database to a point in time. In SQL Server disaster recovery situations, transaction logs enable to repair a SQL Server database and bring it to the state before the disaster. Be aware that even with regular backups, there will be some data missing. These are the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it’s not necessary to re-create the database from its last backup. The database might still be online and all you need to do is roll back several transactions, such as wrong update, insert, or delete. The restore to a point in time feature is available in SQL Server, but for large databases, it is very time-consuming, as SQL Server first restores a full database backup, and then restores transaction log backups, one after another, up to the recovery point. 
During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it’s important that you haven’t manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them. How to read a SQL Server transaction log? SQL Server doesn’t provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge, return a large number of not easy to read and understand columns, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it’s necessary to match it to a value in the MDF file. When you finally read the transactions executed, you have to create a script for it. How to easily repair a SQL database? The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the read transactions. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available. First, restore the last full database backup on SQL Server using SQL Server Management Studio. I’ll name it Restored_AW2014. Then, start ApexSQL Log It will automatically detect all local servers. If not, click the icon right to the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don’t have transaction log backups, but the LDF file hasn’t been lost during the SQL Server disaster, add it using Add.   To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That’s why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of various filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transactions, operation type (e.g. to read only data inserts), table name, SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I’ll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read so if there were any schema changes, such as a new function created, or an existing table modified, they will be ignored and database will not be properly repaired. The data repair for modified tables will fail. In the Tables tab, I’ll make sure all tables are selected. I will uncheck the Show operations on dropped tables option, to reduce the number of transactions. Click Next. ApexSQL Log offers three options. 
Select Open results in grid, to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and then create a redo script by clicking the Create redo script icon in the menu.   For a large number of transactions and in a critical situation, when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be directly scripted into a redo file, without showing them in the grid first. Select Generate reconstruction (REDO) script , change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary: The third option will create a command line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but quite useful in daily auditing scenarios. To repair your SQL database, all you have to do is execute the generated redo script using an integrated developer environment tool such as SQL Server Management Studio or any other, against the restored database. You can find more information about how to read SQL Server transaction logs and repair a SQL database on ApexSQL Solution center. There are solutions for various situations when data needs to be recovered, restored, or transactions rolled back. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
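    The backup schedule the article recommends (a weekly full, nightly differentials, and log backups every 15 minutes or hour) maps directly onto three BACKUP statements. A hedged sketch issuing them through pyodbc is below; the server name and paths are invented, the database must be in the full recovery model for the log backup, and in practice these would normally be scheduled as SQL Server Agent jobs or maintenance plans rather than run from a script.

      import pyodbc

      # BACKUP cannot run inside a user transaction, so autocommit is required here.
      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;Trusted_Connection=yes",
          autocommit=True)

      db, path = "MyDatabase", r"D:\Backup"

      weekly_full   = f"BACKUP DATABASE [{db}] TO DISK = N'{path}\\{db}_full.bak' WITH INIT"
      nightly_diff  = f"BACKUP DATABASE [{db}] TO DISK = N'{path}\\{db}_diff.bak' WITH DIFFERENTIAL"
      frequent_log  = f"BACKUP LOG [{db}] TO DISK = N'{path}\\{db}_log.trn'"

      # A scheduler would run: weekly_full once a week, nightly_diff each night,
      # frequent_log every 15 minutes. Running one here as an example:
      conn.execute(frequent_log)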

    Read the article

  • Representation of data in application versus database

    - by user1815201
    I'm going to make an application that will be given data to put in a database. The data will for the most part be the same, but the way it is formatted will vary a lot (it could be anything from text files to .xls to .doc). I'm not a very experienced developer, but I can see some potential issues and I want to minimize them. First off, I have decided to use the DAO pattern, so that I can easily support new file formats or files that are suddenly formatted in different ways. What I really wonder about, though, is how I should manage the data itself within my application. I'm thinking that the database DAO should have models representing each table of the database, with the same relations between them, to make the uploading process easy. But should the filesystem DAOs have to use the same models? I can imagine that when the database changes, the change will suddenly propagate throughout the entire system, all DAOs and models alike. And that is obviously a bad thing. I'm a little bit tired and out of time. Will update with whatever questions you have. Thanks!
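    One common shape for this is to have every source-format DAO parse into the same plain model objects that the database DAO consumes; a new file format then means one new parser class, and a schema change touches only the models and the database DAO. A minimal sketch with invented names (not the poster's actual design):

      import csv
      import sqlite3
      from dataclasses import dataclass
      from typing import Iterable, Protocol

      @dataclass
      class Measurement:              # one shared model used by every DAO
          station: str
          value: float

      class SourceDao(Protocol):      # anything that can yield Measurements
          def read(self) -> Iterable[Measurement]: ...

      class CsvDao:                   # one parser per incoming file format
          def __init__(self, path: str):
              self.path = path
          def read(self) -> Iterable[Measurement]:
              with open(self.path, newline="") as f:
                  for row in csv.DictReader(f):
                      yield Measurement(row["station"], float(row["value"]))

      class DatabaseDao:              # the only class that knows the table layout
          def __init__(self, conn: sqlite3.Connection):
              self.conn = conn
              conn.execute("CREATE TABLE IF NOT EXISTS measurement (station TEXT, value REAL)")
          def save(self, items: Iterable[Measurement]) -> None:
              with self.conn:
                  self.conn.executemany("INSERT INTO measurement VALUES (?, ?)",
                                        [(m.station, m.value) for m in items])

      # The loader never cares which format the data arrived in:
      # DatabaseDao(sqlite3.connect("app.db")).save(CsvDao("input.csv").read())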

    Read the article

  • Designing a Database Application with OOP

    - by Tim C
    I often develop SQL database applications using LINQ, and my methodology is to build model classes to represent each table; each table that needs inserting or updating gets a Save() method (which does either an InsertOnSubmit() or SubmitChanges(), depending on the state of the object). Often, when I need to represent a collection of records, I'll create a class that inherits from a List-like object of the atomic class, e.g.
    public class CustomerCollection : CoreCollection<Customer> { }
    Recently, I was working on an application where end users were experiencing slowness, and each of the objects needed to be saved to the database if it met a certain criterion. My Save() method was slow, presumably because I was making all kinds of round trips to the server and calling DataContext.SubmitChanges() after each atomic save. So, the code might have looked something like this:
    foreach(Customer c in customerCollection) { if(c.ShouldSave()) { c.Save(); } }
    I worked through multiple strategies to optimize, but ultimately settled on passing a big string of data to a SQL stored procedure, where the string has all the data that represents the records I was working with - it might look something like this:
    CustomerID:34567;CurrentAddress:23 3rd St;CustomerID:23456;CurrentAddress:123 4th St
    So, SQL Server parses the string, performs the logic to determine the appropriateness of the save, and then inserts, updates, or ignores. With C#/LINQ doing this work, it saved 5-10 records/s. When SQL does it, I get 100 records/s, so there is no denying the stored proc is more efficient; however, I hate the solution because it doesn't seem nearly as clean or safe. My real concern is that I don't have any better solutions that hold a candle to the performance of the stored proc solution. Am I doing something obviously wrong in how I'm thinking about designing database applications? Are there better ways of designing database applications?
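    The delimited-string procedure works, but the underlying win is simply fewer round trips and fewer commits; most data-access stacks offer a safer batched path to the same effect (table-valued parameters, SqlBulkCopy, or a single parameterised multi-row statement). A language-neutral sketch of the difference, written in Python with SQLite purely for brevity and with invented data:

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, address TEXT)")
      changed = [(23456, "123 4th St"), (34567, "23 3rd St")]   # records that passed ShouldSave()

      # Slow pattern: one statement and one commit per record (per-row round trips).
      for cid, addr in changed:
          conn.execute("UPDATE customer SET address = ? WHERE id = ?", (addr, cid))
          conn.commit()

      # Faster pattern: one batched, parameterised statement and a single commit.
      conn.executemany("UPDATE customer SET address = ? WHERE id = ?",
                       [(addr, cid) for cid, addr in changed])
      conn.commit()

    Batching keeps the parameters typed and escaped by the driver, which addresses the "clean and safe" concern while recovering most of the stored procedure's throughput.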

    Read the article

  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
    Marko Parkkola This article has been submitted by Marko Parkkola, Data systems designer at Saarionen Oy, Finland. Marko is excellent developer and always thinking at next level. You can read his earlier comment which created very interesting discussion here: SQL SERVER- IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for Enumerations in Relational Database. He has really wrote excellent piece here and welcome comments here. Enumerations in Relational Database This is a subject which is very basic thing in relational databases but often not very well understood and sometimes badly implemented. There are of course many ways to do this but I concentrate only two cases, one which is “the right way” and one which is definitely wrong way. The concept Let’s say we have table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there’s a field that tells person’s marital status and let’s name it the same way; MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values likes Single, InRelationship, Married, Divorced. Now here comes the problem, SQL doesn’t have enumerations. The wrong way This is, in my opinion, absolutely the wrong way to do this. It has one upside though; you’ll see the enumeration’s description instantly when you do simple SELECT query and you don’t have to deal with mysterious values. There’s plenty of downsides too and one would be database fragmentation. Consider this (I’ve left all indexes and constraints out of the query on purpose). CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] NVARCHAR(10) ) You have nvarchar(20) field in the table that tells the marital status. Obvious problem with this is that what if you create a new value which doesn’t fit into 20 characters? You’ll have to come and alter the table. There are other problems also but I’ll leave those for the reader to think about. The correct way Here’s how I’ve done this in many projects. This model still has one problem but it can be alleviated in the application layer or with CHECK constraints if you like. First I will create a namespace table which tells the name of the enumeration. I will add one row to it too. I’ll write all the indexes and constraints here too. CREATE TABLE [CodeNamespace] ( [Id] INT IDENTITY(1, 1), [Name] NVARCHAR(100) NOT NULL, CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]), CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name]) ) GO INSERT INTO [CodeNamespace] SELECT 'MaritalStatus' GO Then I create a table that holds the actual values and which reference to namespace table in order to group the values under different namespaces. I’ll add couple of rows here too. CREATE TABLE [CodeValue] ( [CodeNamespaceId] INT NOT NULL, [Value] INT NOT NULL, [Description] NVARCHAR(100) NOT NULL, [OrderBy] INT, CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]), CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id]) ) GO -- 1 is the 'MaritalStatus' namespace INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1 INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2 INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3 INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4 GO Now there’s four columns in CodeValue table. 
CodeNamespaceId tells under which namespace values belongs to. Value tells the enumeration value which is used in Person table (I’ll show how this is done below). Description tells what the value means. You can use this, for example, column in UI’s combo box. OrderBy tells if the values needs to be ordered in some way when displayed in the UI. And here’s the Person table again now with correct columns. I’ll add one row here to show how enumerations are to be used. CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] INT ) GO INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3 GO Now I said earlier that there is one problem with this. MaritalStatus column doesn’t have any database enforced relationship to the CodeValue table so you can enter any value you like into this field. I’ve solved this problem in the application layer by selecting all the values from the CodeValue table and put them into a combobox / dropdownlist (with Value field as value and Description as text) so the end user can’t enter any illegal values; and of course I’ll check the entered value in data access layer also. I said in the “The wrong way” section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema bound so you can even index it if you like. CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING AS SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus FROM [dbo].[Person] p JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value] JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus' GO -- Select from View SELECT * FROM [dbo].[Person_v] GO This is excellent write up byMarko Parkkola. Do you have this kind of design setup at your organization? Let us know your opinion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
