Search Results

Search found 32538 results on 1302 pages for 'database restore'.

Page 16 of 1302

  • jBASE 4.1 Database Noobster Questions

    - by Steve Johnson
    I am a software developer with development experience in C#, C++ .NET, as well as SQL Server 2005/08, Oracle and MySQL. But somehow I can't get jBASE to work on a Windows XP SP3 machine. My goal is to set up user accounts, create a database on a jBASE installation, authenticate, and back up/restore a few tables via a C++ program, and I don't want to do it with the built-in backup/restore tools of jBASE. I was able to install jBASE 4.1 along with all its accessories on my WinXP SP3 machine. I was able to run the jSlimserver and TEMENOS server along with the licensing server, and I was able to add the license key as well. But after that, what am I supposed to do? I have no idea. The docs and online help don't answer a simple question: how do I create a database? The Google search results from the jBASE site all go to 404 pages! Can a jBASE expert guide me through the following steps: Create a jBASE database. Create users. Authenticate via those users. Connect to the database. Create tables and insert data. Connect via a C++ or C# program to the jBASE DB and back up/restore tables. I know that this is too much to ask, but I just don't get the jBASE system; I can't get it to work on my system somehow. Btw, jdc and jexloree don't seem to do anything. I have checked that the environment variables for jBASE are set up correctly and I have verified them. There are no extra JRE or JDK installations on my system. Besides all that, only the licensing client, slim server and TEMENOS server seem to run and listen for connections, and no other executable ever seems to work. A simple tutorial to achieve the objective would be highly appreciated. Also, if anyone can point out a mistake that I have made or anything I might need to check, please do so. I will be highly encouraged and obliged. Thanks, Steve

    Read the article

  • Cache the result of a MySQLdb database query in memory

    - by ensnare
    Our application fetches the correct database server from a pool of database servers, so each query is really two queries: first fetch the correct DB server, then execute the query. We do this so we can take DB servers online and offline as necessary, as well as for load balancing. But the result of the first query seems like it could be cached in memory, so it only actually hits the database every 5 or 10 minutes or so. What's the best way to do this? Thanks.

    Read the article

  • Database structure - is mySQL the right choice?

    - by Industrial
    Hi everyone, We are currently planning the database structure of a quite complex e-commerce web app that has flexibility as its main cornerstone. Our app features a large amount of data (products) and we have run into a slight headache trying to keep performance high without compromising normalization rules in the database, or leaving our highly beloved flexibility concept behind when integrating product options (also widely known as product attributes or parameters). Based on various references and sources available, we have made up lists of the pros and cons of all major and well-known database patterns to solve this. After comparing these, we have come up with two final alternatives: EAV (Entity-attribute-value model): Pros: The database is used for all sorting. Cons: All related queries will include a number of joins between multiple tables in order to complete the collection of data. SLOB (Serialized LOB, also known as Facade?): Pros: Very flexible. Keeps the number of necessary joins low compared to an EAV design pattern. Easy to update/add/remove data for each product. Cons: All sorting will be done by the application instead of the database. Will use lots of performance (memory?) when big datasets are processed by a large number of users. Our main questions: Which pattern/structure would you use, or maybe even a different solution? Are there better databases than MySQL available nowadays to accomplish what we want? Thanks a lot! Reference: http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters
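    For readers unfamiliar with the EAV shape being weighed here, a minimal sketch of the pattern in SQL; the table and column names are illustrative, not from the question:

        CREATE TABLE product (
            id INT PRIMARY KEY,
            name VARCHAR(200)
        );
        CREATE TABLE attribute (
            id INT PRIMARY KEY,
            name VARCHAR(100)            -- e.g. 'colour', 'weight'
        );
        CREATE TABLE product_attribute_value (
            product_id INT NOT NULL,
            attribute_id INT NOT NULL,
            value VARCHAR(255),          -- everything stored as text: the classic EAV trade-off
            PRIMARY KEY (product_id, attribute_id),
            FOREIGN KEY (product_id) REFERENCES product(id),
            FOREIGN KEY (attribute_id) REFERENCES attribute(id)
        );
        -- Fetching one product's options requires the joins the 'cons' list refers to:
        SELECT p.name, a.name, pav.value
        FROM product p
        JOIN product_attribute_value pav ON pav.product_id = p.id
        JOIN attribute a ON a.id = pav.attribute_id
        WHERE p.id = 42;

    The single-text-column value field is what makes database-side sorting by typed attributes awkward, which is the trade-off against the SLOB option described above.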

    Read the article

  • Small standalone SQL database similar to Access in the old days (i.e. file database)

    - by Ian
    Hi, I am looking for an easy-to-use and easy-to-deploy SQL-type database I can ship with a desktop application. This will be a small application users can download from my website. In the VB6 days, Access was the common database for small desktop apps; what are my options these days? Looking at SQL CE, it seems to have quite a few limitations, such as count(distinct) etc. SQL Express needs to be installed and running as a service (could I include the SQL Express deployment in my deployment so the user doesn't even know it's been installed? I assume size would then be an issue). SQL 2005/2008 is not an option due to size and licensing restrictions. I would like to use C#, WPF and Entity Framework. What would seem to be the best options based on your knowledge and experience? Thanks

    Read the article

  • How to Design a SaaS Database

    - by Josh Curren
    I have a web app that I built for a trucking company that I would like to offer as SaaS. What is the best way to design the database? Should I create a new database for each company? Or should I use one database with tables that have a prefix of the company name? Or should I use one database with one of each table and just add a company id field to the tables? Or is there some other way to do it?
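    For the third option, the usual shape is a tenant key on every tenant-owned table. A minimal sketch, with illustrative names that are not from the question:

        CREATE TABLE company (
            company_id INT PRIMARY KEY,
            name VARCHAR(200) NOT NULL
        );
        CREATE TABLE shipment (
            shipment_id INT PRIMARY KEY,
            company_id INT NOT NULL,          -- the tenant key
            origin VARCHAR(200),
            destination VARCHAR(200),
            FOREIGN KEY (company_id) REFERENCES company(company_id)
        );
        -- Every query is then scoped to the calling tenant:
        SELECT shipment_id, origin, destination
        FROM shipment
        WHERE company_id = 42;

    An index leading on company_id on each tenant-owned table keeps these scoped queries cheap; the main risk is a forgotten WHERE clause, which a query layer or row-level security can guard against.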

    Read the article

  • SQL SERVER – Fix : Error : 3117 : The log or differential backup cannot be restored because no files

    - by pinaldave
    I received the following email from one of my readers. Dear Pinal, I am new to SQL Server and our regular DBA is on vacation. Our production database had some problem and I have just restored the full database backup to the production server. When I try to apply the log backup I get the following error. I am sure this is a valid log backup file. Screenshot is attached. [Few other details regarding server/IP address removed] Msg 3117, Level 16, State 1, Line 1 The log or differential backup cannot be restored because no files are ready to roll forward. Msg 3013, Level 16, State 1, Line 1 RESTORE LOG is terminating abnormally. Screenshot attached. [Removed as it contained live IP address] Please help immediately. Well, I answered this question in an earlier post, 2 years ago, over here: SQL SERVER – Fix : Error : Msg 3117, Level 16, State 4 The log or differential backup cannot be restored because no files are ready to rollforward. However, I will try to explain it a little more this time. For a SQL Server database to be used, it should be in the online state. There are multiple states of a SQL Server database: ONLINE (available for data), OFFLINE, RESTORING, RECOVERING, RECOVERY PENDING, SUSPECT, and EMERGENCY (limited availability). If the database is online, it means it is active and in operational mode. It does not make sense to apply further logs from backup if operations have continued on this database. The common practice during the backup restore process is to specify the keyword RECOVERY when the database is restored. When the RECOVERY keyword is specified, SQL Server brings the database online and will not accept any further log backups. However, if you want to restore more than one backup file, i.e. after restoring the full backup you want to apply further differential or log backups, you cannot do that while the database is online and already active. You need to have your database in a state where it can accept further backup data and not online data requests. If SQL Server were online and also accepting database backup files, there could be data inconsistency. This is the reason that when there is more than one database backup file to be restored, one has to restore the database with the NORECOVERY keyword in the RESTORE operation. I suggest you all read one more post written by me earlier, in which I explained the timeline with an image and graphics: SQL SERVER – Backup Timeline and Understanding of Database Restore Process in Full Recovery Model. Sample code for reference: RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorksFull.bak' WITH NORECOVERY; RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorksDiff.bak' WITH RECOVERY; In this post, I am not trying to cover complete backup and recovery; I am just attempting to address one type of error and its resolution. Please test these scenarios on a development server. Playing with live database backup and recovery is always very crucial and needs to be properly planned. Leave a comment here if you need help with this subject. Similar Post: SQL SERVER – Restore Sequence and Understanding NORECOVERY and RECOVERY Note: We will cover Standby Server maintenance and Recovery in another blog post; it is intentionally not covered in this post. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Backup and Restore, SQL Error Messages, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
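    Since the reader's error came from RESTORE LOG rather than a differential restore, a minimal sketch of the equivalent log-restore sequence may help; the file names are illustrative:

        RESTORE DATABASE AdventureWorks
            FROM DISK = 'C:\AdventureWorksFull.bak' WITH NORECOVERY;
        RESTORE LOG AdventureWorks
            FROM DISK = 'C:\AdventureWorksLog1.trn' WITH NORECOVERY;
        RESTORE LOG AdventureWorks
            FROM DISK = 'C:\AdventureWorksLog2.trn' WITH RECOVERY;  -- the last restore brings the DB online

    Every restore except the last uses NORECOVERY, which leaves the database in the RESTORING state and ready to roll forward; only the final restore specifies RECOVERY.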

    Read the article

  • SQLAuthority News – History of the Database – 5 Years of Blogging at SQLAuthority

    - by pinaldave
    Don’t miss the Contest: Participate in the 5th Anniversary Contest. Today is this blog’s birthday, and I want to do a fun, informative blog post. Five years ago this day I started this blog. Intention – my personal web blog. I wrote this blog for me, and still today whatever I learn I share here. I don’t want to wander too far off topic, though, so I will write about two of my favorite things – history and databases. And what better way to cover these two topics than to talk about the history of databases. If you want to be technical, databases as we know them today only date back to the late 1960s and early 1970s, when computers began to keep records and store memories. But the idea of memory storage didn’t just appear 40 years ago – there was a history behind wanting to keep these records. In fact, the written word originated as a way to keep records – ancient man didn’t suddenly decide he wanted to read novels; he needed a way to keep track of the harvest, of his flocks, and of the tributes paid to the local lord. And that is how writing and the database began. You could consider the cave paintings from 17,000 years ago at Lascaux, France, or the clay tokens from the ancient Sumerians in 8,000 BC to be the first instances of record keeping – and thus databases. If you prefer, you can consider the advent of written language to be the first database. Many historians believe the first written language appeared in the 37th century BC, with Egyptian hieroglyphics. The ancient Sumerians, not to be outdone, also created their own written language within a few hundred years. Databases could be more closely described as collections of information, in which case the Sumerians win the prize for the first archive. A collection of 20,000 stone tablets was unearthed in 1964 near the modern-day city of Tell Mardikh, in Syria. This ancient database is from 2,500 BC, and appears to be a sort of law library where apprentice-scribes copied important documents. Further archaeological digs hope to uncover the palace library, and thus an even larger database. Of course, the most famous ancient database would have to be the Royal Library of Alexandria, the great collection of records and wisdom in ancient Egypt. It was created by Ptolemy I, and existed from 300 BC through 30 AD, when Julius Caesar effectively erased the hard drives by accidentally setting fire to it. As any programmer knows who has forgotten to hit “save” or has experienced a sudden power outage, thousands of hours of work were lost in a single instant. Databases existed in very similar conditions up until recently. Cuneiform tablets gave way to papyrus, which led to vellum, and eventually modern paper and the printing press. Someday the databases we rely on so much today will become another chapter in the history of record keeping. Who knows what the databases of tomorrow will look like! Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Should we have a database independent SQL like query language in Django?

    - by Yugal Jindle
    Note: I know we already have the Django ORM, which keeps things database-independent and converts to database-specific SQL queries. Once things start getting complicated, it is preferred to write raw SQL queries for better efficiency, but when you write raw SQL queries your code gets tied to the database you are using. I also understand it's important to use the full power of your database, which cannot be achieved with the Django ORM alone. My question: Until I use a database-specific feature, why should I be tied to the database? For instance: we have a query with multiple joins and we decided to write a raw SQL query. Now that makes my website Postgres-specific, even though I have not used any Postgres-specific feature. I feel there should be some fake SQL language which can translate to any database's SQL query. Even Django's ORM could be built over it, so that if you go outside the ORM but are not database-specific, you can still remain database-independent. I asked the same question to Jacob Kaplan-Moss (in person): he advised me to stay with the database that I like and use its whole power, to which I agree. But my point was not that we should be database-independent. My point is that we should be database-independent until we use a database-specific feature. Please explain: why should there be a fake SQL layer over the actual SQL?

    Read the article

  • SQL SERVER – World Shapefile Download and Upload to Database – Spatial Database

    During my recent training, I was asked by a student if I know a place where he can download spatial files for all the countries around the world, as well as whether there is a way to upload shapefiles to a database. Here is a quick tutorial for it. VDS Technologies has all the spatial [...]
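    The excerpt is truncated, but in SQL Server the upload target for shapefile data is typically a table with a spatial column. A hedged sketch, with table and column names that are mine rather than from the tutorial (loading the shapefile itself is done with an external import tool):

        CREATE TABLE dbo.WorldBorders (
            Id INT IDENTITY(1,1) PRIMARY KEY,
            CountryName NVARCHAR(100),
            Boundary GEOGRAPHY              -- one polygon or multipolygon per country
        );
        -- Example query once the data is loaded: which country contains a given point?
        DECLARE @p GEOGRAPHY = GEOGRAPHY::Point(38.7, -9.1, 4326);  -- lat, long, SRID
        SELECT CountryName
        FROM dbo.WorldBorders
        WHERE Boundary.STIntersects(@p) = 1;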

    Read the article

  • 11gR2 DB 11.2.0.1 Certified with E-Business Suite on Solaris 10 (x86-64)

    - by Steven Chan
    Oracle Database 11g Release 2 version 11.2.0.1 is now certified with Oracle E-Business Suite 11i (11.5.10.2) and Release 12 (12.0.4 or higher, 12.1.1 or higher) on Oracle Solaris on x86-64 (64-bit) running Solaris 10. This announcement includes:
    · Oracle Database 11gR2 version 11.2.0.1
    · Oracle Database 11gR2 version 11.2.0.1 Real Application Clusters (RAC)
    · Transparent Data Encryption (TDE) Column Encryption with EBS 11i and R12
    · Advanced Security Option (ASO)/Advanced Networking Option (ANO)
    · Export/Import Process for E-Business Suite 11i and R12 Database Instances
    · Transparent Data Encryption (TDE) Tablespace Encryption

    Read the article

  • EPM 11.1.2 - R&A DATABASE CONNECTIONS DISAPPEAR FROM THE "DATABASE CONNECTION MANAGER"

    - by Powder
    When accessing the database connection panel through Reporting and Analysis, all previously entered database connections fail to appear. This is due to a bug in the Windows SMB2 protocol, and to work around it you have to disable the protocol. On Windows 2008 the protocol is automatically enabled. This needs to be done on both the servers and the clients. Note that "server" is the server which hosts the RAF repository service and the RM1 folder, and "client" is the server which hosts the replicated Repository service that accesses repository files via the network, i.e. \\<server_host>\RM1. In order to disable SMB 2.0 on the server side, follow these steps:
    1. Run "regedit" on the Windows Server 2008 based computer.
    2. Expand and locate the subtree as follows: HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters
    3. Add a new REG_DWORD key with the name "Smb2" (without quotation marks): Value name: Smb2, Value type: REG_DWORD, 0 = disabled, 1 = enabled.
    4. Set the value to 0 to disable SMB 2.0, or set it to 1 to re-enable SMB 2.0.
    5. Reboot the server.
    To disable SMB 2.0 for Windows Vista or Windows Server 2008 systems that are the "client" systems, run the following commands:
    sc config lanmanworkstation depend= bowser/mrxsmb10/nsi
    sc config mrxsmb20 start= disabled
    Note there's an extra " " (space) after the "=" sign.
    To re-enable SMB 2.0 for Windows Vista or Windows Server 2008 systems that are the "client" systems, run the following commands:
    sc config lanmanworkstation depend= bowser/mrxsmb10/mrxsmb20/nsi
    sc config mrxsmb20 start= auto
    Again, note there's an extra " " (space) after the "=" sign.

    Read the article

  • Database design and performance impact

    - by Craige
    I have a database design issue that I'm not quite sure how to approach, nor whether the benefits outweigh the costs. I'm hoping some P.SE members can give some feedback on my suggested design, as well as any similar experiences they may have come across. As it goes, I am building an application that has large reporting demands. Speed is an important issue, as there will be peak usages throughout the year. This application/database has a multiple-level, many-to-many relationship, e.g. objects a, b, c and d, where object b has a relationship to object a, object c has relationships to objects b and a, and object d has relationships to objects c, b and a. Theoretically, this could go on for unlimited levels, though logic dictates it could only go so far. My idea here, to speed up reporting, would be to create a syndicate table that acts as a global many-to-many join table. In this table (with the given example), one might see:
    +----------+-----------+---------+
    | child_id | parent_id | type_id |
    +----------+-----------+---------+
    | b        | a         | 1       |
    | c        | b         | 2       |
    | c        | a         | 3       |
    | d        | c         | 4       |
    | d        | b         | 5       |
    | d        | a         | 6       |
    +----------+-----------+---------+
    where a, b, c and d would translate to their respective IDs in their respective tables. So, for ease of reporting all of the a's which exist on object d, one could query
    SELECT * FROM `syndicates` ... JOINS TO child and parent tables ... WHERE parent_id=a AND type_id=6;
    rather than having a query with a join to each level up the chain. The problem: this table grows exponentially and, in a given year, could easily grow past 20,000 records for one client. Given multiple clients over multiple years, this table will VERY quickly explode to millions of records and beyond. Now, the database will, in time, be partitioned across multiple servers, but I would like (as most would) to keep the number of servers as low as possible while still offering flexibility. Also, writes and updates would be exponentially longer (though possibly not noticeable to the end user), as there would be multiple inserts/updates/scans on this table to keep it in sync. Am I going in the right direction here, or am I way off track? What would you do in a similar situation? This solution seems overly complex, but allows the greatest flexibility and the fastest read operations. Sidenote 1 - This structure allows me to add new levels to the tree easily. Sidenote 2 - The database querying for this database is done through an ORM framework.
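    The syndicates table described above is essentially what is often called a closure table: every ancestor-descendant pair is materialized. A hedged sketch of the write-side cost the question worries about, with illustrative literals standing in for real keys (how type_id is assigned isn't specified in the question, so it is left as a placeholder):

        -- Attach a new object e under parent d: first the direct link...
        INSERT INTO syndicates (child_id, parent_id, type_id)
        VALUES ('e', 'd', 0 /* placeholder */);
        -- ...then one row per ancestor of d, copied in a single statement.
        INSERT INTO syndicates (child_id, parent_id, type_id)
        SELECT 'e', s.parent_id, 0 /* placeholder */
        FROM syndicates s
        WHERE s.child_id = 'd';

    Reads stay a single indexed lookup, while writes fan out with depth; that asymmetry is exactly the sync overhead described above, and it is usually a good trade for read-heavy reporting workloads.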

    Read the article

  • List of resources for database continuous integration

    - by David Atkinson
    Because there is so little information on database continuous integration out in the wild, I've taken it upon myself to aggregate as much as possible and post the links to this blog. Because it's my area of expertise, this will focus on SQL Server and Red Gate tooling, although I am keen to include any quality articles that discuss the topic in general terms. Please let me know if you find a resource that I haven't listed!
    General database continuous integration:
    · What is Database Continuous Integration? (David Atkinson)
    · Continuous Integration for SQL Server Databases (Troy Hunt)
    · Installing NAnt to drive database continuous integration (David Atkinson)
    · Continuous Integration Tip #3 - Version your Databases as part of your automated build (Doug Rathbone)
    · How the "migrations" approach makes database continuous integration possible (David Atkinson)
    · Continuous Integration for the Database (Keith Bloom)
    Setting up continuous integration with Red Gate tools:
    · Continuous integration for databases using Red Gate tools - A technical overview (white paper, Roger Hart and David Atkinson)
    · Continuous integration for databases using Red Gate SQL tools (product pages)
    · Database continuous integration step by step (David Atkinson)
    · Database Continuous Integration with Red Gate Tools (video, David Atkinson)
    · Database schema synchronisation with RedGate (Vincent Brouillet)
    · Database continuous integration and deployment with Red Gate tools (David Duffett)
    · Automated database releases with TeamCity and Red Gate (Troy Hunt)
    · How to build a database from source control (David Atkinson)
    · Continuous Integration Automated Database Update Process (Lance Lyons)
    Other:
    · Evolutionary Database Design (Martin Fowler)
    · Recipes for Continuous Database Integration: Evolutionary Database Development (book, Pramod J Sadalage)
    · Recipes for Continuous Database Integration (book, Pramod Sadalage)
    · The Red Gate Guide to SQL Server Team-based Development (book, Phil Factor, Grant Fritchey, Alex Kuznetsov, Mladen Prajdic)
    · Using SQL Test Database Unit Testing with TeamCity Continuous Integration (Dave Green)
    · Continuous Database Integration (covers MySQL, Pearson Education)
    Technorati Tags: SQL Server, Continuous Integration

    Read the article

  • Oracle Database 11g Implementation Specialist - 14 to 16 March 2011

    - by Claudia Costa
    OPN Bootcamp - Oracle Software Specialization Course. Dear Partner, Oracle's new partner programme is built on the Specialization of its partners. In the last fiscal year, many partners already began their specializations in the areas they are dedicated to and which are priorities for their business. To support the effort and dedication of the many partners working towards certification of their staff, the local alliances and channel team has launched a series of initiatives. Among them is the creation of this OPN Bootcamp, together with Oracle University, dedicated to training and preparation for the Implementation exams that are mandatory for obtaining the Oracle Database 11g specialization. This training course is intended to prepare partners for the Implementation exam taking place on 29 March, during the OPN Satellite Event to be held in Lisbon (further details about this event will be communicated shortly). Your presence at this preparation course in the days before the OPN Satellite event is essential so that your technical staff can sit the exam on 29 March fully prepared and with the best chance of a positive result. That way, on 29 March, they can obtain the much-desired certification, with exam costs 100% covered by Oracle. We count on your presence! Content: Oracle Database 11g: 2 Day DBA Release 2 + preparation for exam 1Z0-154 Oracle Database 11g: Essentials. Audience: Database Administrators, Technical Administrators, Technical Consultants, Support Engineers. Prerequisites: knowledge of the Linux operating system. Duration: 3 days + exam (1 day). Schedule: 09:00 / 18:00. Dates: 14 to 16 March. Location: Centro de Formação Oracle, Pessoas e Processos, Rua do Conde Redondo, 145 - 1º - Lisboa. Access: Marquês de Pombal metro station. Participation cost: €140 per person/day = €420 per person (3 days)* - this price includes the Implementation exam. *Final price for partners; it already includes funding from the Alliances and Channel team. Exam date and location: 29 March - Oracle University premises. Registration is free. Places are limited. Reserve your place now: Email. For more information about registration: Vítor Pereira, phone: 21 778 38 39, mobile: 933777099, fax: 21 778 38 40. For other information, please contact: Claudia Costa / 21 423 50 27

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I’ll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install it, and play along. But first, let’s describe some disaster recovery scenarios where it’s useful. About SQL Server disaster recovery: Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one’s immune. What you can do is take all the actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it’s necessary to have a list of steps that will be executed when a disaster occurs, and to test them before a disaster. This way, you’ll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don’t meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require new recovery plan testing, as these changes most probably require changing and tweaking the recovery plan itself. What is a good SQL Server disaster recovery plan? A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple but efficient backup strategy would be a full database backup every night and a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and a differential database backup every night until next Friday, when you will create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans. Why are transaction logs important? Transaction log backups contain the transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll the database back or forward to a point in time. In SQL Server disaster recovery situations, transaction logs enable you to repair a SQL Server database and bring it to the state before the disaster. Be aware that even with regular backups, there will be some data missing: the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it’s not necessary to re-create the database from its last backup. The database might still be online, and all you need to do is roll back several transactions, such as a wrong update, insert, or delete. The restore to a point in time feature is available in SQL Server, but for large databases it is very time-consuming, as SQL Server first restores a full database backup, and then restores transaction log backups, one after another, up to the recovery point.
During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it’s important that you haven’t manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log, and all you have to do is read and replay them. How to read a SQL Server transaction log? SQL Server doesn’t provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge and return a large number of columns that are not easy to read and understand, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it’s necessary to match them to values in the MDF file. When you have finally read the transactions executed, you have to create a script for them. How to easily repair a SQL database? The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the transactions read. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both the MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available. First, restore the last full database backup on SQL Server using SQL Server Management Studio; I’ll name it Restored_AW2014. Then, start ApexSQL Log. It will automatically detect all local servers. If not, click the icon to the right of the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don’t have transaction log backups, but the LDF file hasn’t been lost during the SQL Server disaster, add it using Add. To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That’s why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of various filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transactions, operation type (e.g. to read only data inserts), table name, the SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I’ll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read, so if there were any schema changes, such as a new function created or an existing table modified, they will be ignored, the database will not be properly repaired, and the data repair for modified tables will fail. In the Tables tab, I’ll make sure all tables are selected. I will uncheck the Show operations on dropped tables option, to reduce the number of transactions. Click Next. ApexSQL Log offers three options.
Select Open results in grid to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and then create a redo script by clicking the Create redo script icon in the menu. For a large number of transactions, and in a critical situation when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be scripted directly into a redo file, without being shown in the grid first. Select Generate reconstruction (REDO) script, change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary. The third option will create a command-line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but quite useful in daily auditing scenarios. To repair your SQL database, all you have to do is execute the generated redo script, using an integrated development environment tool such as SQL Server Management Studio or any other, against the restored database. You can find more information about how to read SQL Server transaction logs and repair a SQL database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered or restored, or transactions rolled back. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
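    As a footnote to the undocumented functions mentioned above, here is a minimal sketch of peeking at the online log with fn_dblog. It is unsupported and its output format can change between SQL Server versions, so treat it as exploration only, not as part of the tool workflow described in the post:

        -- Read the active portion of the current database's transaction log.
        -- NULL, NULL = no start/end LSN filter.
        SELECT [Current LSN],
               Operation,          -- e.g. LOP_INSERT_ROWS, LOP_MODIFY_ROW
               [Transaction ID],
               AllocUnitName       -- which table/index the log record touches
        FROM fn_dblog(NULL, NULL)
        WHERE Operation IN ('LOP_INSERT_ROWS', 'LOP_MODIFY_ROW', 'LOP_DELETE_ROWS');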

    Read the article

  • Representation of data in application versus database

    - by user1815201
    I'm going to make an application that will be given data to put in a database. The data will for the most part be the same, but the way it is formatted will vary a lot (it could be anything from text files to .xls to .doc). I'm not a very experienced developer, but I can see some potential issues and I want to minimize them. First off, I have decided to use the DAO pattern, so that I can easily support new file formats or files suddenly formatted in different ways. What I really wonder about, though, is how I should manage the data itself within my application. I'm thinking that the database DAO should have models representing each table of the database, with the same relations between them, to make the uploading process easy. But should the filesystem DAOs have to use the same models? I can imagine that when the database changes, the change will suddenly propagate throughout the entire system, all DAOs and models alike, and that is obviously a bad thing. I'm a little bit tired and out of time; will update with whatever questions you have. Thanks!

    Read the article

  • Designing a Database Application with OOP

    - by Tim C
    I often develop SQL database applications using Linq, and my methodology is to build model classes to represent each table, where each table that needs inserting or updating gets a Save() method (which does either an InsertOnSubmit() or SubmitChanges(), depending on the state of the object). Often, when I need to represent a collection of records, I'll create a class that inherits from a List-like object of the atomic class, e.g. public class CustomerCollection : CoreCollection<Customer> { } Recently, I was working on an application where end-users were experiencing slowness, where each of the objects needed to be saved to the database if it met a certain criterion. My Save() method was slow, presumably because I was making all kinds of round-trips to the server and calling DataContext.SubmitChanges() after each atomic save. So, the code might have looked something like this: foreach(Customer c in customerCollection) { if(c.ShouldSave()) { c.Save(); } } I worked through multiple strategies to optimize, but ultimately settled on passing a big string of data to a SQL stored procedure, where the string has all the data that represents the records I was working with. It might look something like this: CustomerID:34567;CurrentAddress:23 3rd St;CustomerID:23456;CurrentAddress:123 4th St SQL Server then parses the string, performs the logic to determine the appropriateness of the save, and inserts, updates, or ignores. With C#/Linq doing this work, it saved 5-10 records/s; when SQL does it, I get 100 records/s, so there is no denying the stored proc is more efficient. However, I hate the solution because it doesn't seem nearly as clean or safe. My real concern is that I don't have any better solutions that hold a candle to the performance of the stored proc solution. Am I doing something obviously wrong in how I'm thinking about designing database applications? Are there better ways of designing database applications?
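    One commonly suggested middle ground, not from the question itself, is a table-valued parameter (available since SQL Server 2008): it keeps the single round-trip but avoids hand-rolled string parsing. A hedged sketch with illustrative names:

        CREATE TYPE dbo.CustomerTvp AS TABLE (
            CustomerID INT PRIMARY KEY,
            CurrentAddress NVARCHAR(200)
        );
        GO
        CREATE PROCEDURE dbo.SaveCustomers
            @Customers dbo.CustomerTvp READONLY
        AS
        BEGIN
            SET NOCOUNT ON;
            -- Update rows that already exist...
            UPDATE c
            SET c.CurrentAddress = t.CurrentAddress
            FROM dbo.Customer c
            JOIN @Customers t ON t.CustomerID = c.CustomerID;
            -- ...and insert the ones that don't.
            INSERT INTO dbo.Customer (CustomerID, CurrentAddress)
            SELECT t.CustomerID, t.CurrentAddress
            FROM @Customers t
            WHERE NOT EXISTS (SELECT 1 FROM dbo.Customer c WHERE c.CustomerID = t.CustomerID);
        END

    On the C# side, the whole collection is passed as a single SqlParameter of SqlDbType.Structured, so the batch still arrives in one call while staying typed and injection-safe.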

    Read the article

  • What relational database system should I learn? [closed]

    - by acidzombie24
    At the moment i know sqlite (my favorite), mysql (its ok, i get annoyed) and i do not want to learn ms/t sql (it only allows one nullable row if the column is unique). I am thinking about learning a new database system. My requirements for it is Must allow multiple connections at once (read and write) All or data i choose must be ACID compliant Performance should be good. I have a 17gb table in one project. It should perform well on read and transactional writes. With mysql it took hours to restore it and there were no foreign keys on that specific table. It only finished in a workday because i found a suggestion to adjust a setting which i think was key-buffer. And it still took hours Unique columns that allow more then one row to be null. I shouldn't have to say it but dammit MS. Allows one to make ongoing backups. Something like 'binary logs'. Some relatively small amounts of data i can grab and apply it to my local db to have it in sync with the one on the server. Table joins. I rather not write a bunch of queries to simulate a join What I would like but is not required Foreign keys. This may be a requirement later Open sourced Fair tool support. So i can measure queries, easily backup/restore, etc .NET and C (or C++) interface. (I seen one that uses raw tcp with JSON which was okish) Good subquery support. Once i was working with an older version of mysql (i believe <5.1 but it could have been 5.1) and i had to write many queries to do one query because it couldn't do subqueries. Or maybe it couldnt do it efficiently and died bc of memory limitations with a huge dataset. What db system should i learn?

    Read the article

  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
    Marko Parkkola This article has been submitted by Marko Parkkola, data systems designer at Saarionen Oy, Finland. Marko is an excellent developer and is always thinking at the next level. You can read his earlier comment, which created a very interesting discussion, here: SQL SERVER - IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for enumerations in relational databases. He has written an excellent piece here, and comments are welcome. Enumerations in Relational Database This is a subject which is a very basic thing in relational databases but is often not very well understood and sometimes badly implemented. There are of course many ways to do this, but I concentrate on only two cases: one which is "the right way" and one which is definitely the wrong way. The concept: Let's say we have a table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there's a field that tells the person's marital status, and let's name it the same way: MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values like Single, InRelationship, Married, Divorced. Now here comes the problem: SQL doesn't have enumerations. The wrong way: This is, in my opinion, absolutely the wrong way to do this. It has one upside, though: you'll see the enumeration's description instantly when you do a simple SELECT query, and you don't have to deal with mysterious values. There are plenty of downsides too, and one would be database fragmentation. Consider this (I've left all indexes and constraints out of the query on purpose):
    CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] NVARCHAR(20) )
    You have an nvarchar(20) field in the table that tells the marital status. The obvious problem with this is: what if you create a new value which doesn't fit into 20 characters? You'll have to come and alter the table. There are other problems also, but I'll leave those for the reader to think about. The correct way: Here's how I've done this in many projects. This model still has one problem, but it can be alleviated in the application layer or with CHECK constraints if you like. First I create a namespace table which tells the name of the enumeration, and I add one row to it too. I'll write all the indexes and constraints here too.
    CREATE TABLE [CodeNamespace] ( [Id] INT IDENTITY(1, 1), [Name] NVARCHAR(100) NOT NULL, CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]), CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name]) ) GO INSERT INTO [CodeNamespace] SELECT 'MaritalStatus' GO
    Then I create a table that holds the actual values and which references the namespace table in order to group the values under different namespaces. I'll add a couple of rows here too.
    CREATE TABLE [CodeValue] ( [CodeNamespaceId] INT NOT NULL, [Value] INT NOT NULL, [Description] NVARCHAR(100) NOT NULL, [OrderBy] INT, CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]), CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id]) ) GO -- 1 is the 'MaritalStatus' namespace INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1 INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2 INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3 INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4 GO
    Now there are four columns in the CodeValue table.
    CodeNamespaceId tells which namespace the value belongs to. Value tells the enumeration value which is used in the Person table (I'll show how this is done below). Description tells what the value means; you can use this column, for example, in a UI combo box. OrderBy tells whether the values need to be ordered in some way when displayed in the UI. And here's the Person table again, now with the correct columns. I'll add one row here to show how the enumeration is to be used.
    CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] INT ) GO INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3 GO
    Now, I said earlier that there is one problem with this: the MaritalStatus column doesn't have any database-enforced relationship to the CodeValue table, so you can enter any value you like into this field. I've solved this problem in the application layer by selecting all the values from the CodeValue table and putting them into a combobox/dropdownlist (with the Value field as value and Description as text) so the end user can't enter any illegal values; and of course I check the entered value in the data access layer also. I said in the "The wrong way" section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema-bind so you can even index it if you like.
    CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING AS SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus FROM [dbo].[Person] p JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value] JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus' GO -- Select from View SELECT * FROM [dbo].[Person_v] GO
    This is an excellent write-up by Marko Parkkola. Do you have this kind of design setup at your organization? Let us know your opinion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
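    One remark not in the article: the missing enforcement can in fact be pushed into the database with a composite foreign key, at the cost of carrying the namespace id on Person. A hedged sketch building on the schema above (the column and constraint names here are mine):

        -- Add the namespace id, defaulted to the 'MaritalStatus' namespace (Id = 1).
        ALTER TABLE [dbo].[Person]
            ADD [MaritalStatusNamespaceId] INT NOT NULL
                CONSTRAINT [DF_Person_MSNamespace] DEFAULT 1;
        GO
        -- Now (namespace, value) pairs must exist in CodeValue.
        ALTER TABLE [dbo].[Person]
            ADD CONSTRAINT [FK_Person_CodeValue]
            FOREIGN KEY ([MaritalStatusNamespaceId], [MaritalStatus])
            REFERENCES [CodeValue] ([CodeNamespaceId], [Value]);
        GO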

    Read the article

  • Building a database installer with WiX, datadude and Visual Studio 2010

    - by jamiet
    Today I have been using Windows Installer XML (WiX) to build an installer (.msi file) that would install a SQL Server database on a server of my choosing; the source code for that database lives in datadude (a tool which you may know by one of quite a few other names). The basis for this work was a most excellent blog post by Duke Kamstra entitled Implementing a WIX installer that calls the GDR version of VSDBCMD.EXE, which covers the delicate intricacies of doing this - particularly how to call Vsdbcmd.exe in a CustomAction. Unfortunately there are a couple of things wrong with Duke's post: Searching for "datadude wix" didn't turn it up in the first page of search results, and hence it took me a long time to find it - and I knew that it existed. If someone else were after a post on using WiX with datadude, it's likely that they would never have come across Duke's post, and that would be a great shame because it's the definitive post on the matter. Also, it was written in October 2009 and had not been updated for Visual Studio 2010. Well, this blog post is an attempt to solve those problems. Hopefully I've solved the first one just by following a few of my blogging SEO tips while writing this blog post; in the rest of it I will explain how I took Duke's code and updated it to work in Visual Studio 2010. If you need to build a database installer using WiX, datadude and Visual Studio 2010 then you still need to follow Duke's blog post, so go and do that now. Below are the amendments that I made that enabled the project to get built in Visual Studio 2010:
    - In VS2010, datadude's output files have changed from being called Database.<suffix> to <ProjectName>_Database.<suffix>. Duke's code was referencing the old file name formats.
    - Duke used $(var.SolutionDir) and relative paths to point to datadude artefacts; I have replaced these with Votive Project References http://wix.sourceforge.net/manual-wix3/votive_project_references.htm
    - I commented out all references to MicrosoftSqlTypesDbschema in DatabaseArtifacts.wxi. I don't think this is produced in VS2010 (I may be wrong about that, but it wasn't in the output from my project).
    - Similarly, I commented out the component MicrosoftSqlTypesDbschema in VsdbcmdArtifacts.wxi. It wasn't where Duke's code said it should have been, so I am assuming/hoping it isn't needed.
    - Duke's ?define block to work out the appropriate SrcArchPath actually wasn't working for me (i.e. <?if $(var.Platform)=x64 ?> was evaluating to false), so I just took out the conditional stuff and declared the path explicitly to the "Program Files (x86)" path. The old code is still there though, if you need to put it back.
    - None of the <RegistrySearch> stuff is needed for VS2010 - so I commented it all out!
    - Changed to use the /manifest option rather than the /model option on the vsdbcmd.exe command line. Personal preference is all!
    - Added a new component in order to bundle along the vsdbcmd.exe.config file.
    - Made the install of the Custom Action dependent on the relevant feature being selected for install. This one is actually really important: deselecting the database feature for installation does not, by default, stop the CustomAction from executing, and so would cause an error - so that scenario needs to be catered for.
    I have made my amended solution available for download at: http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20110210/InstallMyDatabase.zip It contains two projects: the WiX project and the datadude project that is the source to be deployed (for demo purposes it only contains one table).
    I have also made the .msi available, although in order to get it through file blockers I changed the name from InstallMyDatabase.msi to InstallMyDatabase.ms_ - simply rename the file back once you have downloaded it from: http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20110210/InstallMyDatabase.ms%5E_ . You can try it out for yourself - the only thing it does is dump the files into %Program Files%\MyDatabase and use them to install a database onto a server of your choosing, with a name of your choosing - no damaging side-effects. I will caveat this by saying "it works on my machine" and, not having access to a plethora of different machines, I haven't tested it anywhere else. One potential issue that I know of is that Vsdbcmd.exe has a dependency on SQL Server CE, although if you have SQL Server tools or Visual Studio installed you should be fine. Unfortunately it's not possible to bundle the SQL Server CE installer along in the .msi, because Windows will not allow you to call one installer from inside another - the recommended way to get around this problem is to build a bootstrapper to bundle the whole lot together, but doing that is outside the scope of this blog post. If you discover any other issues then please let me know. [Screenshots of the installer dialogs and of the installed files appeared here in the original post.] Hope this is useful! @jamiet

    Read the article
