Search Results

Search found 7957 results on 319 pages for 'production databases'.


  • Multiple test Active Directory environments hand in hand with production domain controllers

    - by MadBoy
    What's the best approach to having multiple test environments alongside the production one? We have multiple programming teams that build solutions that use Active Directory very often. We have tried different approaches, from giving teams their own domain controllers (in the same subnet) to creating additional OUs in our production AD that a team gets control over, within which it can create and delete accounts. We have thought of four possible solutions: (1) setting up separate OUs in our production environment; (2) creating subdomains of our contoso.com domain, like test.contoso.com and something.contoso.com, and delegating control to the teams (would we need additional DCs for this, or would the two we already have be enough?); (3) setting up an additional test domain controller that has a trust to our main domain, which all teams can use as they please; (4) setting up a single domain controller for every team/project. We're taking into consideration the amount of resources needed, security (for example, having multiple domain controllers with multiple passwords may lead users to use simpler passwords), and overall best practices for this scenario.

    Read the article

  • Django - Moving database from development to production servers

    - by Garfonzo
    I am working on a Django project with a MySQL backend. I'm curious about the best way to update a production server's database to reflect the changes made on the development server's database. My current workflow is to make some changes to a models.py file, then create a schemamigration using South. Sometimes I do several migrations across several apps within the main project folder before the work is ready for the production database. This means that there are several migration files in the app/migrations/ folder created by South. So on the production server, how does one update the database to reflect all the changes made in development, without any data loss?
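
    A minimal sketch of the usual South workflow for this, assuming the generated migration files are committed and deployed along with the code (the app name myapp is a placeholder):

        # Development server: generate and apply a migration after editing models.py
        python manage.py schemamigration myapp --auto
        python manage.py migrate myapp

        # Production server, after deploying the code (including app/migrations/):
        # South records which migrations have already run and applies only the
        # new ones, altering the schema in place so existing data is preserved.
        python manage.py migrate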

    Read the article

  • Recommendation for tuning 100s of SQL Databases

    - by wayne
    I'm running several SQL servers, each hosting a few hundred multi-gig databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index/profile/tune this large number of databases? With at least 600 catalogs, I can't have someone manually profile and index each one according to its usage patterns. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine.
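
    One low-effort starting point, as a sketch rather than a full answer: the missing-index DMVs (present in both SQL Server 2005 and 2008) can rank index candidates the optimizer has recorded since the last restart, across every database on the instance:

        -- Rank missing-index suggestions across all databases on the instance.
        SELECT TOP 50
            DB_NAME(mid.database_id) AS database_name,
            mid.statement AS table_name,
            mid.equality_columns,
            mid.inequality_columns,
            mid.included_columns,
            migs.user_seeks * migs.avg_total_user_cost
                * (migs.avg_user_impact / 100.0) AS estimated_benefit
        FROM sys.dm_db_missing_index_details AS mid
        JOIN sys.dm_db_missing_index_groups AS mig
            ON mig.index_handle = mid.index_handle
        JOIN sys.dm_db_missing_index_group_stats AS migs
            ON migs.group_handle = mig.index_group_handle
        ORDER BY estimated_benefit DESC;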

    Read the article

  • Backup all Plesk MySQL Databases to individual files

    - by Michael
    Hi, because I'm new to shell scripting I need a hand. I currently back up all my databases to a single file, which makes restores pretty hard. The second problem is that my MySQL password doesn't work because of a Plesk bug, so I get the password from "/etc/psa/.psa.shadow". Here is the code that I use to back up all my databases to a single file:

        mysqldump -uadmin -p`cat /etc/psa/.psa.shadow` --all-databases | bzip2 -c > /root/21.10.2013.sql.bz2

    I found some scripts on the web that back up each database to an individual file, but I don't know how to make them work for my situation. Here is an example script:

        for db in $(mysql -e 'show databases' -s --skip-column-names); do
            mysqldump $db | gzip > "/backups/mysqldump-$(hostname)-$db-$(date +%Y-%m-%d-%H.%M.%S).gz";
        done

    Can someone help me make the script above work for my situation? Requirements: back up each database to an individual file, using the Plesk password location.
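
    A minimal sketch combining the two snippets above: the admin password is read from the Plesk shadow file for both the database listing and the dumps, and MySQL's virtual information_schema is skipped (the target directory and bzip2 compression are assumptions carried over from the original command):

        #!/bin/sh
        # Dump each database to its own bzip2-compressed file,
        # authenticating with the password stored by Plesk.
        PASS=`cat /etc/psa/.psa.shadow`

        for db in $(mysql -uadmin -p"$PASS" -e 'show databases' -s --skip-column-names); do
            # information_schema is a virtual schema; skip it.
            [ "$db" = "information_schema" ] && continue
            mysqldump -uadmin -p"$PASS" "$db" | bzip2 -c \
                > "/root/mysqldump-$db-$(date +%Y-%m-%d).sql.bz2"
        done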

    Read the article

  • Remove MySQL ibdata1 without dumping and restoring existing proper databases

    - by Halfgaar
    My MySQL server contains two databases of 100+ GB each. One was created with innodb_file_per_table and one wasn't. The one that wasn't has been dumped, ready to be reloaded. However, the ibdata1 file is still huge and I don't have enough free space. Normal advice in this situation is to dump and remove each database, stop MySQL, remove ibdata1 and the transaction logs, and then reload the databases. My specific question is: can I leave the databases that were created with innodb_file_per_table alone? Or will they be destroyed when I remove ibdata1, even though all their files are separate? I can't afford to take this database off-line to dump and reload it, and because it's already properly made with separate files per table, doing so would feel pretty pointless.
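
    As a quick sanity check (a generic technique, not from the question), you can confirm which tables actually have their own tablespace files; each InnoDB table created while innodb_file_per_table was enabled should have a matching schema/table.ibd file under the MySQL data directory:

        -- Is file-per-table in effect for newly created tables?
        SHOW VARIABLES LIKE 'innodb_file_per_table';

        -- List InnoDB tables; cross-check each against a .ibd file on disk.
        SELECT table_schema, table_name
        FROM information_schema.tables
        WHERE engine = 'InnoDB'
        ORDER BY table_schema, table_name;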

    Read the article

  • Resolving TFS_SCHEMA_VERSION Errors In Team Foundation Server 2010 Collection Databases

    - by Jeff Ferguson
    I recently backed up a Team Foundation Server 2010 project collection database and restored it onto another server. All of that went well, until I tried to use the restored database on the new server. As it turns out, the old server was running the Release Candidate of TFS 2010 and the new server is running the RTM version of TFS 2010. I ended up with an error message on the new server's Team Web Access site about the project collection's TFS_SCHEMA_VERSION property not containing the appropriate value. As it turns out, TFS_SCHEMA_VERSION is an extended property on the project collection database. I ran the following SQL script against the project collection database restored onto the new server:

        EXEC [Tfs_DefaultCollection].sys.sp_dropextendedproperty @name=N'TFS_PRODUCT_VERSION'
        GO
        EXEC [Tfs_DefaultCollection].sys.sp_addextendedproperty @name=N'TFS_PRODUCT_VERSION', @value=N'10.0.30319.1'
        GO
        EXEC [Tfs_DefaultCollection].sys.sp_dropextendedproperty @name=N'TFS_SCHEMA_VERSION'
        GO
        EXEC [Tfs_DefaultCollection].sys.sp_addextendedproperty @name=N'TFS_SCHEMA_VERSION', @value=N'Microsoft Team Foundation Server 2010 (RTM)'
        GO

    Now, all is well. I can navigate to http://newserver:8080/tfs/ and see the restored project collection and its contents.
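
    To verify the result (a generic check, not part of the original post), the database-level extended properties can be listed directly; TFS_PRODUCT_VERSION and TFS_SCHEMA_VERSION should show the new values:

        USE [Tfs_DefaultCollection];
        SELECT name, value
        FROM sys.extended_properties
        WHERE class = 0;  -- class 0 = database-level properties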

    Read the article

  • Don’t learn SSDT, learn about your databases instead

    - by jamiet
    Last Thursday I presented my session “Introduction to SSDT” at the SQL Supper event held at the offices of 7 Digital (loved the samosas, guys). I did my usual spiel: tour of the IDE, connected development, declarative database development, yadda yadda yadda… and at the end asked if there were any questions. One gentleman in attendance (sorry, can’t remember your name) raised his hand and stated that by attempting to evangelise all of the features I’d missed the single biggest benefit of SSDT: that it can tell you stuff about your database that you didn’t already know. I realised that he was dead right. SSDT allows you to import your whole database schema into a new project, and it will instantly give you a list of errors and/or warnings pertaining to the objects in your database. Invalid references (e.g. a long-forgotten stored procedure that refers to a non-existent column), unnecessary 3-part naming, incorrect case usage, syntax errors… it’ll tell you about all of ‘em! Turn on static code analysis (this article shows you how) and you’ll learn even more, such as any stored procedures that begin with “sp_”, WHERE clauses that will kill performance, use of @@IDENTITY instead of SCOPE_IDENTITY(), use of deprecated syntax, implicit casts, etc. The list goes on and on. I urge you to download and install SSDT (it takes a few minutes, it’s free, and you don’t need SQL Server or Visual Studio pre-installed), start a new project, right-click on the project, import from your database, and see what happens. You may be surprised what you discover. Let me know in the comments below what results you get (total number of objects, number of errors/warnings), I’d be interested to know! @Jamiet

    Read the article

  • Leveraging Logical Standby Databases in Oracle 11g Data Guard

    Oracle Data Guard still offers support for the venerable logical standby database in Oracle Database 11g. This article investigates how data warehouse and data mart environments can effectively leverage logical standby database features while simultaneously providing a final destination when failover from a primary database is mandated during disaster recovery.

    Read the article

  • Open Source NoSQL Databases Ramp Up

    <b>Database Journal:</b> "For most of the Web era, Relational Database Management Systems (RDBMS) based on SQL have dominated the database landscape. But over the course of the last year, a new approach has begun to take hold known as NoSQL, offering an alternative to the traditional RDBMS."

    Read the article

  • Best practices when creating/modeling databases?

    - by Oscar Mederos
    At university I learned some steps to model a database: (1) model the problem using the Extended Entity-Relationship Model; (2) extract the functional dependencies; (3) apply algorithms to normalize the database (3NF or Boyce-Codd); (4) create the database. I'm studying Computer Science, and since taking that course I've been wondering whether I always need to follow those steps when creating a complex database for a specific problem. For example, do PHP / .NET / ... programmers always do that? Or are there tools that simplify the process, maybe using another way to represent the problem instead of the EERM?
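
    As a tiny illustration of the normalization step, assuming an invented order-taking schema: a table that stores customer names alongside orders violates 3NF, because customer_name is determined by customer_id alone, so normalization splits it in two:

        -- Before: orders(order_id, customer_id, customer_name, order_date)
        -- order_id -> customer_id -> customer_name is a transitive
        -- dependency, so split the table:
        CREATE TABLE customers (
            customer_id   INT PRIMARY KEY,
            customer_name VARCHAR(100) NOT NULL
        );

        CREATE TABLE orders (
            order_id    INT PRIMARY KEY,
            customer_id INT NOT NULL REFERENCES customers(customer_id),
            order_date  DATE NOT NULL
        );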

    Read the article

  • How many production servers should I start with?

    - by elfintar
    I'm building a site that I plan to grow to the size of SO. I'm only planning to have one production server to start off with, hosting everything including the database. I know it's very hard to say, but am I likely to run into trouble quickly (if the site takes off), and if so, should I start out with more than one server so I can load balance everything from day 1? If not, should I be looking for something a little bigger than this spec? http://www.123-reg.co.uk/dedicated-server-hosting/

    Read the article

  • Mobile game production workflow using Html5 and Visual Studio

    - by Mihalis Bagos
    I want to know of any framework that lets you build and test applications inside Visual Studio using Html5/JS. We need an emulator (like the one in the Android SDK) for as many devices as possible, and we need to be able to run the application in as few steps as possible (using the "Run" command in Visual Studio being the number-one choice). This also extends to building and deploying to app stores: is there a way to circumvent the cloud services and build locally? I am at a loss amid the plethora of tools and technologies available for game design using Html5. However, I really don't like the way some implementations try to make you rely on their cloud services, so services like appmobi are at the bottom of my list.

    Read the article

  • Generic Repository with SQLite and SQL Compact Databases

    - by Andrew Petersen
    I am creating a project that has a mobile app (Xamarin.Android) using a SQLite database and a WPF application (Code First Entity Framework 5) using a SQL Compact database. This project will eventually have a SQL Server database as well. Because of this I am trying to create a generic repository, so that I can pass in the correct context depending on which application is making the request. The issue I ran into is that my DataContext for the SQL Compact database inherits from DbContext, while the SQLite database inherits from SQLiteConnection. What is the best way to make this generic, so that it doesn't matter what kind of database is on the back end? This is what I have tried so far on the SQL Compact side:

        public interface IRepository<TEntity>
        {
            TEntity Add(TEntity entity);
        }

        public class Repository<TEntity, TContext> : IRepository<TEntity>, IDisposable
            where TEntity : class
            where TContext : DbContext
        {
            private readonly TContext _context;

            public Repository(DbContext dbContext)
            {
                _context = dbContext as TContext;
            }

            public virtual TEntity Add(TEntity entity)
            {
                return _context.Set<TEntity>().Add(entity);
            }
        }

    And on the SQLite side:

        public class ElverDatabase : SQLiteConnection
        {
            static readonly object Locker = new object();

            public ElverDatabase(string path) : base(path)
            {
                CreateTable<Ticket>();
            }

            public int Add<T>(T item) where T : IBusinessEntity
            {
                lock (Locker)
                {
                    return Insert(item);
                }
            }
        }
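
    One possible direction, as a sketch rather than a definitive answer: keep IRepository<TEntity> as the only type callers see, and give each back end its own adapter. The SQLite adapter below wraps the existing ElverDatabase; the class name SqliteRepository is invented for illustration:

        // Hypothetical adapter: exposes the SQLite connection through the
        // same IRepository<TEntity> abstraction the EF repository implements.
        public class SqliteRepository<TEntity> : IRepository<TEntity>
            where TEntity : class, IBusinessEntity
        {
            private readonly ElverDatabase _db;

            public SqliteRepository(ElverDatabase db)
            {
                _db = db;
            }

            public TEntity Add(TEntity entity)
            {
                // sqlite-net's Insert returns a row count, so return the
                // entity itself to satisfy the interface contract.
                _db.Add(entity);
                return entity;
            }
        }

        // Callers no longer care which database is behind the interface:
        // IRepository<Ticket> repo = new SqliteRepository<Ticket>(database);
        // repo.Add(new Ticket());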

    Read the article

  • Columnar Databases

    - by jchang
    Ingres just published a TPC-H benchmark for VectorWise, an analytic database technology employing 1) SIMD processing (Intel SSE 4.2), 2) better memory optimizations to leverage on-chip cache, 3) compression, and 4) column-based storage. Ingres originated as a research project at UC Berkeley (see Wikipedia) in the 1970s, and has since become a commercially supported, open-source database system. Apparently, Ingres project people later founded Sybase. So Ingres, in a sense, is the grandfather (or perhap...(read more)

    Read the article

  • Node.JS testing with Jasmine, databases, and pre-existing code

    - by Jim Rubenstein
    I've recently built the start of a core system which is likely going to turn into a monster product. I'm building the system with node.js, and decided after I got a small base built that it'd be a great idea to start using some sort of automated test suite. I decided to use Jasmine, as it seems pretty solid and has a lot of features for stubbing, spying, and mocking methods and classes. The application has a lot of external data stores and API access (Kestrel, MySQL, MongoDB, Facebook, and more). My issue is that I've got a good amount of code written that I want to start testing, as it represents the underpinnings of the application. What are the best practices for testing methods/classes that access external APIs that I may or may not have control over? As an example, I have a data structure that fetches a bunch of data from a MySQL database, and I want to test the method that retrieves the data. I could test the fetch method, which is supposed to return an array of objects, but to isolate the method from the database I need to define my own fixture data. So what I end up doing is stubbing the MySQL execution and returning a static dataset; in other words, I write a function that returns the dataset that makes my test pass. That doesn't seem to actually test the code, other than verifying that a method is being called. I know this is kind of abstract and vague; the idea of testing seems very abstract itself, though, so hopefully someone has some experience and can guide me in the right direction. Any advice or reading I can do is more than welcome. Thanks in advance.
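
    For what it's worth, a minimal sketch of the stubbing approach described above, using Jasmine 2.x spy syntax (the repo object and its methods are invented for illustration). The point is to assert on the row-to-object mapping logic, not merely that the query method was called:

        // Hypothetical data-access object wrapping a node-mysql style connection.
        var repo = {
          connection: { query: function (sql, cb) { /* real driver call */ } },
          fetchAll: function (cb) {
            this.connection.query('SELECT id, name FROM users', function (err, rows) {
              if (err) return cb(err);
              cb(null, rows.map(function (r) { return { id: r.id, name: r.name }; }));
            });
          }
        };

        describe('repo.fetchAll', function () {
          it('maps driver rows into domain objects', function (done) {
            var fakeRows = [{ id: 1, name: 'Ada' }, { id: 2, name: 'Lin' }];

            // Stub the driver so no real database is touched.
            spyOn(repo.connection, 'query').and.callFake(function (sql, cb) {
              cb(null, fakeRows);
            });

            repo.fetchAll(function (err, users) {
              expect(err).toBeNull();
              // These assertions exercise the mapping code under test.
              expect(users.length).toBe(2);
              expect(users[0].name).toBe('Ada');
              done();
            });
          });
        });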

    Read the article

  • Actually utilizing relational databases for entity systems

    - by Marc Müller
    Recently I was researching several entity systems, and obviously I came across T=Machine's fantastic articles on the subject. In Part 5 of the series, the author uses a relational schema to explain how an entity system is built and works. Since reading this, I have been wondering whether actually using a compact SQL library would be fast enough for real-time usage in video games. Performance seems to be the main issue with a full-blown SQL database for managing all entities and components. However, as mentioned in T=Machine's post, basically all access to data inside the SQL DB is done sequentially by each system over each component. Additionally, using a library like SQLite, one could easily improve performance by storing the entity data exclusively in RAM to increase access speeds. Disregarding possible performance issues, using a SQL database would, in my opinion, allow for a very intuitive implementation of entity systems and bring along certain other benefits, like easy de/serialization of game states and consistency checks such as the uniqueness of entity IDs. Edit for clarification: the main question is whether using a SQL database for the actual entity management (not just storing the game state on disk) in a real-time game would still yield a framerate appropriate for a game, or whether someone is aware of projects that demonstrate SQL in a video game.
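
    For concreteness, a minimal sketch of the relational layout described in articles of that kind, written for SQLite (table and column names are invented); opening the database as ":memory:" keeps all access in RAM:

        -- Entities are bare IDs; each component type is its own table.
        CREATE TABLE entities (
            entity_id INTEGER PRIMARY KEY  -- uniqueness enforced by the DB
        );

        CREATE TABLE position_components (
            entity_id INTEGER PRIMARY KEY REFERENCES entities(entity_id),
            x REAL NOT NULL,
            y REAL NOT NULL
        );

        CREATE TABLE velocity_components (
            entity_id INTEGER PRIMARY KEY REFERENCES entities(entity_id),
            dx REAL NOT NULL,
            dy REAL NOT NULL
        );

        -- A "movement system" becomes one sequential pass over a component set:
        UPDATE position_components
        SET x = x + (SELECT dx FROM velocity_components v
                     WHERE v.entity_id = position_components.entity_id),
            y = y + (SELECT dy FROM velocity_components v
                     WHERE v.entity_id = position_components.entity_id)
        WHERE entity_id IN (SELECT entity_id FROM velocity_components);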

    Read the article

  • How to get experience in large scale databases?

    - by Justin
    I have written applications that are very small scale, and the code I write works fine for them. But I have often wondered how the server-side code I write would scale up from hundreds of queries per day to millions. Also, when looking at possible jobs/projects, people are often looking for developers with experience in this sort of high-traffic database design, so I would at least like to be able to say: I haven't gotten to work on a project that was this popular, but I have at least tried to simulate it. Are there tools or frameworks that can generate a lot of traffic, or at least simulate what would happen with traffic at different orders of magnitude, so I could get some practice writing optimized code for higher-traffic applications?
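
    As one generic starting point (not from the question itself), an HTTP load generator such as ApacheBench can replay traffic at whatever order of magnitude you like against a test deployment:

        # Fire 100,000 requests at a test endpoint, 100 at a time, roughly
        # simulating a traffic spike; raise -n and -c by orders of magnitude
        # and watch where database latency starts to degrade.
        ab -n 100000 -c 100 http://localhost:8000/api/items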

    Read the article
