Search Results

Search found 31945 results on 1278 pages for 'database comparison'.

  • Chapter 7–Enforced Data Protection

    - by drsql
    As the book progresses, I find myself veering from the originally stated outline quite a bit, because as I teach about this more (and I am teaching a daylong db design class in August at http://www.sqlsolstice.com/ … shameless plug, but it is on topic :) I start to find that a given order works better. Originally I had slated myself to talk more about modeling here for three chapters, then get back to the more implementation topics to finish out the book, but now I am going to keep plugging through...(read more)

  • Speaking - SQL Saturday 173, Washington DC

    - by AllenMWhite
    After a great time at the PASS Summit in Seattle I'll be once again presenting on PowerShell for SQL Server at SQL Saturday #173 in Chevy Chase, Maryland. On Friday, December 7 I'll be presenting my full-day session, Automate and Manage SQL Server with PowerShell. Here's the abstract: This soup-to-nuts all-day session will first introduce you to PowerShell, after which you'll learn the basic SMO object model, how to manipulate data with PowerShell and how to use SMO to manage objects. We'll then...(read more)

  • Database for heat tolerances of various cables?

    - by I. F.
    Is there any kind of unified database of heat tolerances for networking cables? I've been setting up a number of home/small office networks lately, and as a mostly-amateur I could really use some information on what is safe to run behind a radiator, next to a steam pipe, etc. The question I'm up against at the moment is: can I run normal RJ11 phone line cable (from DSL modem to phone jack) behind a steam radiator without risking a fire? Unlike with cat5, I could not find published standards for these, so I'm turning to experts with more experience. This is a cut-rate show. Do I go out and buy more cabling, and if so which, or use the spare that I have?

  • Scanning the Error Log with PowerShell

    - by AllenMWhite
    One of the most important things you can do as a DBA is to keep tabs on the errors reported in the error log, but there's a lot of information there and sometimes it's hard to find the 'good stuff'. You can open the errorlog file directly in a text editor and search for errors, but that gets tedious, and string searches generally return just the lines with the error message numbers; in the error log, the real information you want is in the line after that. PowerShell 2.0 introduced a new cmdlet...(read more)
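
    For comparison, a hedged T-SQL sketch of a similar string search done from inside SQL Server rather than PowerShell; xp_readerrorlog is undocumented, and the parameter meanings noted below are the commonly cited ones, so treat this as an assumption rather than the article's method:

      -- Search the current SQL Server error log (log 0, type 1 = SQL Server log)
      -- for lines containing the string 'Error'; the filter text must be Unicode.
      EXEC sys.xp_readerrorlog 0, 1, N'Error';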

  • Fewer SQL Developers needed?

    - by Mercfh
    According to TIOBE, http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html (not exactly reliable, but still), and just from what I notice around here, I see less talk about SQL in general. Has there been a slump recently in web development that uses databases like MySQL, or in data warehousing? Or have alternative solutions like NoSQL/CouchDB/MongoDB started to take over? Or have I just been missing something?

  • MySQL with multiple threads and processes

    - by Abhan
    I'm developing a telecom messaging platform in C, and I'm going to need multiple processes to be working with the MySQL DB. How can I make two processes read/write to/from a MySQL DB and, if/when one of them goes down, get the other to seamlessly take over the work until the dead process gets back to work? I've been thinking about and googling some options, and I'm stuck at the point where I don't know which one to choose. What I think so far is that table locking is not the best option to go for, as it will stall the other process until the table is unlocked. The other option is to use row-level locks or manual locks, but I can't find the best way to do it.
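
    One possible sketch, offered as an assumption rather than the poster's solution: use MySQL's named advisory locks so only one process is "active" at a time. The lock is tied to the connection, so if the active process dies, the server releases it and the standby can take over. The lock name is hypothetical.

      -- Both processes run this at startup and in their retry loop; whoever
      -- gets 1 back becomes the active worker.
      SELECT GET_LOCK('msg_platform_worker', 10) AS got_lock;

      -- ... only the process holding the lock reads/writes the messaging tables ...

      -- On a clean shutdown, release explicitly so failover is immediate.
      SELECT RELEASE_LOCK('msg_platform_worker');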

  • Search multiple tables

    - by gilden
    I have developed a web application that is used mainly for archiving all sorts of textual material (documents, references to articles, books, magazines etc.). There can be any number of archive tables in my system, each with its own schema. The schema can be changed by a moderator through the application (imagine something similar to a really dumbed-down version of phpMyAdmin). Users can search for anything from all of the tables. By using FULLTEXT indexes together with substring searching (for fields which do not support FULLTEXT indexing), the script inserts the results of a search into a single table, and by ordering these results by the similarity measure I can fairly easily return the paginated results. However, this approach has a few problems:

      - substring searching can only find exact matches
      - the 50% rule applies to all tables separately, and thus MySQL may not return important matches or may too naively discard common words
      - it is quite expensive in terms of query numbers and execution time (not an issue right now, as there's not a lot of data yet in the tables)
      - normalized data is not even searched (I have different tables for categories, languages and file attachments)

    My planned solution: create a single table with columns similar to id, table_id, row_id, data. Every time a row is created/modified/deleted in any of the data tables, this central table also gets updated, with the data column containing a concatenation of all the fields in the row. I could then create a single index for Sphinx and use it for searches instead, as sketched below. Are there any more efficient solutions or best practices for approaching this? Thanks.
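
    A rough MySQL sketch of the planned central table and the write that keeps it in sync (all names and the example values are hypothetical, not taken from the poster's schema):

      CREATE TABLE search_index (
          id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          table_id INT UNSIGNED NOT NULL,   -- which archive table the row came from
          row_id   INT UNSIGNED NOT NULL,   -- primary key of the row in that table
          data     MEDIUMTEXT   NOT NULL,   -- concatenation of all searchable fields
          UNIQUE KEY uq_source (table_id, row_id)
      );

      -- Run whenever a source row is created or modified; REPLACE uses the
      -- unique (table_id, row_id) key to overwrite the previous copy. Sphinx
      -- then indexes the data column of this one table.
      REPLACE INTO search_index (table_id, row_id, data)
      VALUES (3, 1417, CONCAT_WS(' ', 'document title', 'author name', 'category'));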

  • Cron job failing to back up a Postgres database

    - by user705142
    I'm unsure what's going on here: I've got a backup script which runs fine under root. It produces a 300kb database dump in the proper directory. When running it as a cron job with exactly the same command, however, an empty gzip file appears with nothing in it. The cron log shows no error, just that the command has been run. This is the script:

      #! /bin/bash
      DIR="/opt/backup"
      YMD=$(date "+%Y-%m-%d")
      su -c "pg_dump -U postgres mydatabasename | gzip -6 > "$DIR/database_backup.$YMD.gz" " postgres
      # delete backup files older than 60 days
      OLD=$(find $DIR -type d -mtime +60)
      if [ -n "$OLD" ] ; then
          echo deleting old backup files: $OLD
          echo $OLD | xargs rm -rfv
      fi

    And the cron job:

      01 10 * * * root sh /opt/daily_backup_script.sh

    It produces a database_backup file, just an empty one. Anyone know what's going on here?

  • Foreign key restrictions -> yes or no?

    - by This is it
    I would like to hear some "real life experience" suggestions on whether foreign key restrictions are a good or a bad thing to enforce in a DB. I would kindly ask students/beginners to refrain from jumping in and answering quickly and without thinking. At the beginning of my career I thought that the stupidest thing you could do was disregard referential integrity. Today, after a "few" projects, I'm thinking differently. Quite differently. What do you think: should we enforce foreign key restrictions or not? Please explain your answer.
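
    For reference, a minimal sketch of the kind of restriction being debated (table and column names are hypothetical); the constraint makes the database itself reject orphaned child rows:

      CREATE TABLE Customer (
          CustomerId INT PRIMARY KEY
      );

      CREATE TABLE CustomerOrder (
          OrderId    INT PRIMARY KEY,
          CustomerId INT NOT NULL,
          -- inserts into CustomerOrder fail unless the customer exists, and
          -- deleting a referenced customer fails unless its orders go first
          CONSTRAINT FK_CustomerOrder_Customer
              FOREIGN KEY (CustomerId) REFERENCES Customer (CustomerId)
      );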

  • An algorithm for finding subset matching criteria?

    - by Macin
    I recently came up with a problem which I would like to share some thoughts about with someone on this forum. This relates to finding a subset. In reality it is more complicated, but I tried to present it here using some simpler concepts. To make things easier, I created this conceptual DB model: let's assume this is a DB for storing recipes. A recipe can have many instruction steps and many ingredients. Ingredients are stored in a cupboard, and we know how much of each ingredient we have. Now, when we create a recipe, we have to define how much of each ingredient we need. When we want to use a recipe, we would just check, for each ingredient, whether the required amount is less than the available amount, and then decide if we can cook the dinner; if the amount required for at least one ingredient is greater than the available amount, the recipe cannot be cooked. A simple SQL query gets the result. This is straightforward, but I'm wondering how I should work when the problem is stated the other way round, i.e. how to find recipes which can be cooked using only the ingredients that are available? I hope my explanation is clear, but if you need any more clarification, please ask.
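
    One way to express the reversed question in SQL, sketched against an assumed schema (recipe, recipe_ingredient and cupboard are hypothetical names): a recipe qualifies when no required ingredient exceeds what is available.

      SELECT r.id, r.name
      FROM recipe r
      WHERE NOT EXISTS (
          SELECT 1
          FROM recipe_ingredient ri
          LEFT JOIN cupboard c ON c.ingredient_id = ri.ingredient_id
          WHERE ri.recipe_id = r.id
            -- a missing cupboard row counts as zero on hand
            AND ri.amount_required > COALESCE(c.amount_available, 0)
      );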

  • Multitenancy in SQL Azure

    - by cibrax
    If you are building a SaaS application in Windows Azure that relies on SQL Azure, it's probable that you will need to support multiple tenants at the database level. This is a short overview of the different approaches you can use to support that scenario.

    A different database per tenant. A new database is created and assigned when a tenant is provisioned. Pros: complete isolation between tenants; all the data for a tenant lives in a database only that tenant can access. Cons: it's not cost effective, since SQL Azure databases are not cheap and the minimum size for a database is 1 GB, so you might be paying for storage that you don't really use; a different connection pool is required per database; updates must be replicated across all the databases; and you need multiple backup strategies across all the databases.

    Multiple schemas in a database shared by all the tenants. A single database is shared among all the tenants, but every tenant is assigned a different schema and database user. Pros: you only pay for a single database, and data is isolated at the schema level, so if the credentials for one tenant are compromised, the data for the other tenants is not. Cons: you need to replicate all the database objects in every schema, so the number of objects can increase indefinitely; updates must be replicated across all the schemas; the connection pool for the database must maintain a different connection per tenant (or set of credentials); and a different user is required per tenant, which is stored at the server level and has to be backed up independently.

    Centralizing the database access with stored procedures in a database shared by all the tenants. A single database is shared among all the tenants, but nobody can read the data directly from the tables. All data operations are performed through stored procedures that centralize access to the tenant data; the stored procedures contain some logic to map the database user to a specific tenant. Pros: you only pay for a single database, and you only have one set of objects to maintain and back up. Cons: there is no real isolation, since all the data for the different tenants is shared in the same tables; you cannot use a traditional ORM like EF Code First for consuming the data; and a different user is required per tenant, which is stored at the server level and has to be backed up independently.

    SQL Federations. A single database is shared among all the tenants, but a different federation is used per tenant. A federation, in a few words, is a mechanism for horizontal scaling in SQL Azure, which basically uses the idea of logical partitions to distribute data based on certain criteria. Pros: you only have a single database with multiple federations, and you can use filtering in the connections to pick the right federation, so any ORM can be used to consume the data. Cons: there is no real isolation at the database level; the isolation is enforced programmatically with federations.
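
    A minimal T-SQL sketch of the "multiple schemas in one database" option, offered as an illustration only; the tenant, login and table names are hypothetical, and the login is assumed to already exist at the server level:

      CREATE SCHEMA tenant042 AUTHORIZATION dbo;
      GO
      CREATE TABLE tenant042.Orders (OrderId INT PRIMARY KEY, Amount DECIMAL(10,2));
      GO
      -- each tenant connects with its own user, defaulting to its own schema
      CREATE USER tenant042_user FOR LOGIN tenant042_login WITH DEFAULT_SCHEMA = tenant042;
      GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::tenant042 TO tenant042_user;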

  • How to choose a NoSQL database engine?

    - by Poma
    We have a database with the following specs:

      - 30k records, 7 MB in size
      - 20 inserts/second
      - 1000 updates/second
      - 1000 range selects/second, by secondary index, approx 10 rows each
      - needs at least one secondary index
      - needs some mechanism to expire keys if they are not updated for 75 secs (can be done via a programmatic garbage collector, but that would require an additional 'last_update' index and add some load)
      - consistency is not required
      - durability is not required
      - the DB should be stored in memory

    For now we use Redis, but it does not have secondary indexes and its keys index:foo:* lookup is too slow. Membase also does not have secondary indexes (as far as I know). MongoDB and the MySQL memory engine have table-level locks. What engine will fit our use case?

  • Search For a Query in RDL Files with PowerShell

    - by AllenMWhite
    In tracking down poorly performing queries for clients I often encounter the query text in a trace file I've captured, but don't know the source of the query. I've found that many of the poorest performing queries are those written into the reports the business users need to make their decisions. If I can't figure out where they came from, usually years after the queries were written, I can't fix them. The first thing I did was find a great utility called RSScripter, which opens up a Windows dialog...(read more)

  • [Speaking] PowerShell at the PASS Summit

    - by AllenMWhite
    Next week is the annual PASS Summit , the event of the year for those of us in the SQL Server community. We get to see our old friends, make new friends, and learn an amazing amount about SQL Server, and it'll be in Seattle, so it's close to the mother ship. I love having Microsoft close, because it's easier to get to know the people who actually make this amazing product we spend our lives working with. This year I'm fortunate to have been selected to present three sessions. One is a regular session...(read more)

  • No Significant Fragmentation? Look Closer…

    If you are relying on 'best-practice' percentage-based thresholds when creating an index maintenance plan that checks the fragmentation in your SQL Server's pages, you may miss occasional 'edge' conditions on larger tables that cause severe degradation in performance. It is worth being aware of the patterns of data access in particular tables when judging the best threshold figure to use.
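
    A hedged T-SQL sketch of looking at the underlying numbers yourself rather than trusting a blanket threshold; the 1000-page floor is an arbitrary example value, not a recommendation from the article:

      SELECT OBJECT_NAME(ips.object_id)        AS table_name,
             i.name                            AS index_name,
             ips.avg_fragmentation_in_percent,
             ips.page_count
      FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
      JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
      WHERE ips.page_count > 1000              -- very small indexes rarely matter
      ORDER BY ips.avg_fragmentation_in_percent DESC;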

  • Binding MySQL to the public or the private LAN IP address - which one is faster?

    - by Lamin Barrow
    So we have 2 servers, both running at the same web host. We have bound MySQL to listen on the public IP address of the database server, and the web server connects to it from that public IP. Both servers run on the same private network. Currently, the DB connect method from our PHP script takes about 3ms to connect to the MySQL database server host. My question is: would MySQL data interaction from the web server be faster if we bound it to listen on the private LAN address of the database server instead of the public IP? Or is it the same regardless, and it won't make a difference?

  • PHP - Data Access Layer

    - by scarpacci
    I am currently reviewing a code base and noticed that a majority of the calls (along with DB connections) are just buried inside the PHP scripts. I would have assumed that, as in other languages, they would have developed some sort of data access layer (like I would do in .NET or Java) for all of the communication to the DB (or implemented MVC, etc.). Is this still a common pattern in PHP, or are there alternative methodologies/patterns for this technology? I am just trying to understand why the subs would have developed it this way. Any insight/info on how experienced developers approach data access in PHP would be very much appreciated.

  • T-SQL Snack: How Much Free Storage Space is Available?

    - by andyleonard
    Introduction: Ever have a need to calculate the total available storage space for a server? Recently I did. Here's a solution I came up with - I bet someone can do this better!

    xp_fixeddrives: There's a handy stored procedure called xp_fixeddrives that reports the available storage space:

      exec xp_fixeddrives

    This returns:

      drive  MB free
      -----  -----------
      C      6998
      E      201066

    Problem solved, right? Maybe.

    The Sum: What I really want is the sum total of all available space presented to the server. I built this...(read more)
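
    One hedged guess at the kind of solution the post builds toward (not necessarily the author's actual code): capture the xp_fixeddrives output with INSERT ... EXEC and sum it.

      -- Collect the per-drive free space into a table variable, then total it.
      DECLARE @drives TABLE (drive CHAR(1), MBfree INT);

      INSERT INTO @drives (drive, MBfree)
      EXEC xp_fixeddrives;

      SELECT SUM(MBfree) AS total_MB_free
      FROM @drives;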

  • How to manage the images for my Desktop Application

    - by NonExistent
    What's the better way to manage the image files of my app? I've been thinking about the way I do it right now (saving the image as a BLOB in the DB), and I ask myself whether it would be better to manage the image as text in my DB; I mean, convert the image to hex (length of 500), then save it in the DB as text, and when calling it convert it from hex back to an image, or something like that. But what do you, as an experienced programmer, consider the better way? Maybe the question is too broad, but I need to know this, and nobody answers me anywhere...
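
    For reference, a hedged sketch of the two storage shapes being compared (T-SQL, hypothetical names); both columns would hold the same image, the text version just hex-encoded and roughly twice the size of the raw bytes:

      CREATE TABLE AppImage (
          ImageId   INT IDENTITY(1,1) PRIMARY KEY,
          FileName  NVARCHAR(260)  NOT NULL,
          ImageBlob VARBINARY(MAX) NULL,   -- raw bytes, as stored today
          ImageHex  VARCHAR(MAX)   NULL    -- hex/text encoding of the same bytes
      );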

  • Why create a new MDX language instead of extending SQL?

    - by DReispt
    I have long experience with SQL, but recently began working with data warehouse and OLAP technologies: building fact and dimension tables that are then queried using MDX (MultiDimensional eXpressions). The problem is that MDX works with a completely different logic compared to SQL, and it's a whole new learning curve even for someone with a strong SQL background. Yes, MDX allows you to do things that would be hard or almost impossible with plain SQL. But sometimes it's frustrating to spend hours on an MDX expression to do something you know you could achieve in minutes using SQL (ok, you can tell me to RTFM ...). But why go to the trouble of creating a completely different new language when you could build on SQL and extend it to add the features needed by OLAP applications?

  • Is 10% too much for autogrow on a 4 GB SQL Server DB?

    - by ntsue
    I am getting the following error:

      2011-03-07 21:59:35.73 spid64 Autogrow of file 'MYDB_DATA' in database 'MYDB' was cancelled by user or timed out after 16078 milliseconds. Use ALTER DATABASE to set a smaller FILEGROWTH value for this file or to explicitly set a new file size.

    I did some research, and I found that for large databases you should set autogrow to a fixed size (MB), and not to a percentage. I feel like this database is not large and I may not be addressing the correct issue by changing this value. Does anyone have any opinions? Thank you! EDIT: I should have specified SQL Server 2008 RC2 running on Windows Server 2008
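
    If the fixed-increment advice is followed, the change itself is a one-liner; a hedged sketch using the file name from the error message, where the 256 MB increment is an arbitrary example value, not a recommendation:

      ALTER DATABASE MYDB
          MODIFY FILE (NAME = N'MYDB_DATA', FILEGROWTH = 256MB);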

  • Chapters Two, Three, and Four

    - by drsql
    I am trying to blog all of the chapters of the book, but due to deadlines and a lot of shuffling about, I never got around to it for these three chapters, two of which I have added since I wrote the original table of contents. All of these contain mostly material from previous editions of the book, updated a good amount, but nothing tremendously different if you had memorized the material from the previous edition. As such, the pre-writing blog ritual wasn't as necessary (for me at least) as it is going...(read more)

  • Full Text Search Strategy For My Website

    - by Hosea146
    I have a website that allows users to search for items in various categories. Each category is a separate area (page) of my website. For example, some categories might be cars, bikes, books etc. At the moment a user has to search for an item by going to the page (for example, cars) and searching for the car they want. I would like to allow the user to search for anything on my site, from my main home page. At the moment, each page (category) has its own set of tables, and I don't really want to turn Full Text Search on for each table (20+ of them) and search each table individually when a search is done. This is going to be slow and tedious. What I'm thinking of doing is creating a single table that will hold all searchable information for each category of item (when an item is saved in its respective table, I would copy all searchable information over to my 'Search' table). I would then turn Full Text Search on for that table, and search that table. Does this sound reasonable? Is there a better way? I've never used Full Text Search before, so this is new to me.
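
    A hedged sketch of the single 'Search' table idea in SQL Server syntax (assuming that is the platform in use; all names are hypothetical):

      CREATE TABLE SearchIndex (
          SearchId   INT IDENTITY(1,1) NOT NULL,
          CategoryId INT NOT NULL,           -- cars, bikes, books, ...
          ItemId     INT NOT NULL,           -- key of the row in its category table
          SearchText NVARCHAR(MAX) NOT NULL, -- searchable text copied in on save
          CONSTRAINT PK_SearchIndex PRIMARY KEY (SearchId)
      );

      CREATE FULLTEXT CATALOG SiteSearchCatalog AS DEFAULT;
      CREATE FULLTEXT INDEX ON SearchIndex (SearchText) KEY INDEX PK_SearchIndex;

      -- the home-page search then hits one table
      SELECT TOP (20) CategoryId, ItemId
      FROM SearchIndex
      WHERE CONTAINS(SearchText, N'"mustang*"');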
