Search Results

Search found 33242 results on 1330 pages for 'database optimization'.

Page 161/1330 | < Previous Page | 157 158 159 160 161 162 163 164 165 166 167 168  | Next Page >

  • TSQL Challenge 28 - SELECT TOP N articles from each category from a SQL Server 2000 database

    The challenge is to write a query that returns the articles to be displayed on the home page of the website. N articles are to be selected from each category, where N is configured in the Categories table. For each category, the N most recent articles should be selected. ArticleID can be used to identify the most recent articles: an article with a higher ArticleID is more recent.
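
    As an illustration only (the real table and column names are not given in the challenge), a correlated-subquery approach that runs on SQL Server 2000, assuming hypothetical tables Articles(ArticleID, CategoryID, Title) and Categories(CategoryID, N), might look like this:

    -- Sketch only: table and column names are assumptions, not the challenge's actual schema.
    -- For each article, count how many newer articles exist in the same category;
    -- keep the article if fewer than that category's configured N are newer.
    SELECT a.CategoryID, a.ArticleID, a.Title
    FROM Articles AS a
    INNER JOIN Categories AS c ON c.CategoryID = a.CategoryID
    WHERE (SELECT COUNT(*)
           FROM Articles AS newer
           WHERE newer.CategoryID = a.CategoryID
             AND newer.ArticleID > a.ArticleID) < c.N
    ORDER BY a.CategoryID, a.ArticleID DESC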

    Read the article

  • VS for Database Pros (GDR R2) Removes Sproc Comments (2 replies)

    I have been working with my team to implement Data Dude GDR R2 for managing ALL of the databases for our applications. So far I am very pleased by what we can do with the tool, with a single exception. I want to have a header with comments as part of every stored procedure so we can track the history of the procedure. When creating a deployment script, and subsequently running it, Data Dude strips ou...

    Read the article

  • SQLAuthority News – SQL Server 2012 Upgrade Technical Guide – A Comprehensive Whitepaper – (454 pages – 9 MB)

    - by pinaldave
    Microsoft has just released the SQL Server 2012 Upgrade Technical Guide. This guide is very comprehensive and covers the subject of upgrading in depth. It is indeed a helpful, detailed white paper; even writing a summary of it would take over 100 pages. This further proves that SQL Server 2012 is quite an important release from Microsoft.

    This white paper discusses how to upgrade from SQL Server 2008/R2 to SQL Server 2012. I love how it starts with the most interesting and basic discussion of upgrade strategies: 1) in-place upgrade, 2) side-by-side upgrade, 3) one-server, and 4) two-server. The whitepaper is not just pure theory; it is also an excellent source of tips and tricks. Here is an example of a good tip from the paper: "If you want to upgrade just one database from a legacy instance of SQL Server and not upgrade the other databases on the server, use the side-by-side upgrade method instead of the in-place method." There are so many tips, tricks, and bits of trivia that listing them all here in a short period of time would be impossible.

    My friend Vinod Kumar, a SQL Server expert, has written a very interesting article on the SQL Server 2012 upgrade. In that article, Vinod addressed the most interesting and practical questions related to upgrades. He started with the fundamentals, such as taking backups before the upgrade, and ended with fail-safe strategies for after the upgrade is over. His blog posts cover the concepts end to end, in simple words and precise statements. A successful upgrade follows a cycle: plan, document the process, test, refine the process, test again, plan the upgrade window, execute, verify the upgrade, and open for business. When you are at Vinod's blog post, I suggest you go all the way down and collect the gold mine of important links. I have bookmarked the blog by blogging about it, and I suggest that you bookmark it as well in whichever way you prefer: Vinod Kumar's blog post on SQL Server 2012 Upgrade Technical Guide.

    The SQL Server 2012 Upgrade Technical Guide is a detailed resource that is also available online for free. Each chapter is carefully crafted and explained in detail. Before downloading the guide, be aware of its size: 454 pages and 9 MB. Here is a quick list of the chapters included in the whitepaper:

    Chapter 1: Upgrade Planning and Deployment
    Chapter 2: Management Tools
    Chapter 3: Relational Databases
    Chapter 4: High Availability
    Chapter 5: Database Security
    Chapter 6: Full-Text Search
    Chapter 7: Service Broker
    Chapter 8: SQL Server Express
    Chapter 9: SQL Server Data Tools
    Chapter 10: Transact-SQL Queries
    Chapter 11: Spatial Data
    Chapter 12: XML and XQuery
    Chapter 13: CLR
    Chapter 14: SQL Server Management Objects
    Chapter 15: Business Intelligence Tools
    Chapter 16: Analysis Services
    Chapter 17: Integration Services
    Chapter 18: Reporting Services
    Chapter 19: Data Mining
    Chapter 20: Other Microsoft Applications and Platforms
    Appendix 1: Version and Edition Upgrade Paths
    Appendix 2: SQL Server 2012: Upgrade Planning Checklist

    Download SQL Server 2012 Upgrade Technical Guide [454 pages and 9 MB]

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Database, DBA, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • SQL query optimization

    - by nvtthang
    I have a problem with a SQL query that takes a long time to return all the records from the database. Can anybody help me? Below is a sample of the schema:

    order(order_id, order_nm)
    customer(customer_id, customer_nm)
    orderDetail(orderDetail_id, order_id, orderDate, customer_id, Comment)

    I want to get the latest customer and order detail information. Here is my solution: I created a function GetLatestOrderByCustomer(CusID) to get the latest order information for a customer.

    CREATE FUNCTION [dbo].[GetLatestOrderByCustomer]
    (
        @cus_id int
    )
    RETURNS varchar(255)
    AS
    BEGIN
        DECLARE @ResultVar varchar(255)

        SELECT @ResultVar = tmp.comment
        FROM (
            SELECT TOP 1 orderDate, comment
            FROM orderDetail
            WHERE orderDetail.customer_id = @cus_id
            ORDER BY orderDate DESC
        ) tmp

        -- Return the result of the function
        RETURN @ResultVar
    END

    Below is my SQL query:

    SELECT customer.customer_id,
           customer.customer_nm,
           dbo.GetLatestOrderByCustomer(customer.customer_id)
    FROM Customer
    LEFT JOIN orderDetail ON orderDetail.customer_id = customer.customer_id

    Running the function for every row takes a long time. Could anybody suggest solutions to make it better? Thanks in advance.
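
    One common way to speed this up (a sketch only, not taken from the original thread) is to drop the scalar function and let the optimizer fetch the latest row per customer inside the main query, for example with OUTER APPLY on SQL Server 2005 or later, using the column names from the question:

    -- Sketch: latest order comment per customer without a per-row scalar UDF.
    -- Assumes SQL Server 2005+ and the Customer/orderDetail columns shown above.
    SELECT c.customer_id,
           c.customer_nm,
           latest.comment
    FROM Customer AS c
    OUTER APPLY (
        SELECT TOP 1 od.comment
        FROM orderDetail AS od
        WHERE od.customer_id = c.customer_id
        ORDER BY od.orderDate DESC
    ) AS latest

    -- A supporting index (name and ordering are illustrative) generally helps:
    -- CREATE INDEX IX_orderDetail_customer_date ON orderDetail (customer_id, orderDate DESC)

    The APPLY form lets SQL Server use an index seek per customer instead of invoking a function once for every row returned.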

    Read the article

  • Oracle Warehouse Builder 11gR2 now also available for Windows

    - by Fekete Zoltán
    This week Oracle Database 11g Release 2 was released for the Windows platform as well, completing the picture across the most important server operating systems, and with that the OWB client has also become available on Windows. OWB is Oracle's market-leading ETL tool: extraction, transformation, load. The Java-based client of Oracle Warehouse Builder was already available on Linux; now it is also available, with support, on Windows (with a little tinkering the Linux Java version could already be made to run on Windows). The OWB Windows client can be obtained in two ways: it is installed automatically with the Database 11gR2 Windows install kit (download), or it can be installed standalone on another machine (download; the Linux client can also be found there). This standalone version appeared on OTN two or three hours ago. :)

    Read the article

  • Oracle Technology Forum on May 5

    - by Lajos Sárecz
    We are holding the spring Oracle Technology Forum on May 11, where we will present Oracle technology news in three tracks. The topics of the half-day event, by track, are as follows:

    Management Track:
    - Operations with Oracle Enterprise Manager, from the application down to the storage
    - The myth of Oracle hacking
    - Make changes without risk

    Architecture Track:
    - Database in the cloud
    - Extreme-performance data warehouses and transactional systems
    - Oracle Maximum Availability Architecture

    Development Track:
    - Life after Forms - options, solutions, recommended directions
    - ADF in business processes and integration environments
    - Embedding content management into an ADF development environment - Oracle UCM integration

    There will also be two keynote presentations running in parallel at the start of the day:
    - Reducing IT costs
    - The unavoidable ADF - a comprehensive and unified Oracle development framework

    As you can see, the focus of the event is Oracle Database 11gR2 and the Oracle development tools. The Sun Oracle Database Machine and Oracle's cloud computing strategy will also be covered. We warmly welcome everyone who works with the Oracle database or Oracle development tools at any level. Registration is already open.

    Read the article

  • Take Steps to Mitigate the Threat of Insiders

    - by Troy Kitch
    Register now for our upcoming Feb 23 webcast, "The Insider Threat: Understand and Mitigate Your Risks." Insiders, by virtue of legitimate access to their organizations' information and IT infrastructure, pose a significant risk to employers. Employees, motivated by financial problems, greed, revenge, the desire to obtain a business advantage, or the wish to impress a new employer, have stolen confidential data, proprietary information, or intellectual property from their employers. Since this data typically resides in databases, organizations need to consider a database security defense-in-depth approach with preventive and detective controls that protect their data against abuse by insiders. Register now and learn about:

    - Actual cases of insider cyber crime
    - The three primary types of insider cyber crime: IT sabotage, theft of intellectual property (e.g. trade secrets), and employee fraud
    - The lack of controls around data that allows these crimes to succeed
    - Solutions to help secure data and database infrastructure

    Read the article

  • OurSQL: The MySQL Database Community Podcast

    - by bertrand.matthelie(at)oracle.com
    For those of you not aware of it, Sheeri K. Cabral and Sarah Novotny are doing a great job running the "OurSQL" Podcast. A great and convenient way to learn more about various MySQL topics. Episode 33 is about "Looking through the Lenz"...that is, Lenz Grimmer, MySQL Community Manager at Oracle and long time MySQLer. Lenz talks about snapshot backups in general, MySQL backups with snapshots, and mylvmbackup, a script he wrote and maintains to easily take consistent MySQL snapshot backups. Check it out! Keep up the good work, Sheeri and Sarah!

    Read the article

  • Optimizing MySQL for small VPS

    - by Chris M
    I'm trying to optimize my MySQL config for a verrry small VPS. The VPS is also running NGINX/PHP-FPM and Magento, all with a limit of 250MB of RAM. This is the output of MySQLTuner:

    -------- General Statistics --------------------------------------------------
    [--] Skipped version check for MySQLTuner script
    [OK] Currently running supported MySQL version 5.1.41-3ubuntu12.8
    [OK] Operating on 64-bit architecture
    -------- Storage Engine Statistics -------------------------------------------
    [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
    [--] Data in MyISAM tables: 1M (Tables: 14)
    [--] Data in InnoDB tables: 29M (Tables: 301)
    [--] Data in MEMORY tables: 1M (Tables: 17)
    [!!] Total fragmented tables: 301
    -------- Security Recommendations -------------------------------------------
    [OK] All database users have passwords assigned
    -------- Performance Metrics -------------------------------------------------
    [--] Up for: 2d 11h 14m 58s (1M q [8.038 qps], 33K conn, TX: 2B, RX: 618M)
    [--] Reads / Writes: 83% / 17%
    [--] Total buffers: 122.0M global + 8.6M per thread (100 max threads)
    [!!] Maximum possible memory usage: 978.2M (404% of installed RAM)
    [OK] Slow queries: 0% (37/1M)
    [OK] Highest usage of available connections: 6% (6/100)
    [OK] Key buffer size / total MyISAM indexes: 32.0M/282.0K
    [OK] Key buffer hit rate: 99.7% (358K cached / 1K reads)
    [OK] Query cache efficiency: 83.4% (1M cached / 1M selects)
    [!!] Query cache prunes per day: 48301
    [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 144K sorts)
    [OK] Temporary tables created on disk: 13% (27K on disk / 203K total)
    [OK] Thread cache hit rate: 99% (6 created / 33K connections)
    [!!] Table cache hit rate: 0% (32 open / 51K opened)
    [OK] Open file limit used: 1% (20/1K)
    [OK] Table locks acquired immediately: 99% (1M immediate / 1M locks)
    [!!] InnoDB data size / buffer pool: 29.2M/8.0M
    -------- Recommendations -----------------------------------------------------
    General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        Reduce your overall MySQL memory footprint for system stability
        Enable the slow query log to troubleshoot bad queries
        Increase table_cache gradually to avoid file descriptor limits
    Variables to adjust:
      *** MySQL's maximum memory usage is dangerously high ***
      *** Add RAM before increasing MySQL buffer variables ***
        query_cache_size (> 64M)
        table_cache (> 32)
        innodb_buffer_pool_size (>= 29M)

    And this is the config:

    #
    # The MySQL database server configuration file.
    #
    # You can copy this to one of:
    # - "/etc/mysql/my.cnf" to set global options,
    # - "~/.my.cnf" to set user-specific options.
    #
    # One can use all long options that the program supports.
    # Run program with --help to get a list of available options and with
    # --print-defaults to see which it would actually understand and use.
    #
    # For explanations see
    # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

    # This will be passed to all mysql clients
    # It has been reported that passwords should be enclosed with ticks/quotes
    # escpecially if they contain "#" chars...
    # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
    [client]
    port = 3306
    socket = /var/run/mysqld/mysqld.sock

    # Here is entries for some specific programs
    # The following values assume you have at least 32M ram

    # This was formally known as [safe_mysqld]. Both versions are currently parsed.
    [mysqld_safe]
    socket = /var/run/mysqld/mysqld.sock
    nice = 0

    [mysqld]
    #
    # * Basic Settings
    #
    #
    # * IMPORTANT
    # If you make changes to these settings and your system uses apparmor, you may
    # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
    #
    user = mysql
    socket = /var/run/mysqld/mysqld.sock
    port = 3306
    basedir = /usr
    datadir = /var/lib/mysql
    tmpdir = /tmp
    skip-external-locking
    #
    # Instead of skip-networking the default is now to listen only on
    # localhost which is more compatible and is not less secure.
    bind-address = 127.0.0.1
    #
    # * Fine Tuning
    #
    key_buffer = 32M
    max_allowed_packet = 16M
    thread_stack = 192K
    thread_cache_size = 8
    sort_buffer_size = 4M
    read_buffer_size = 4M
    myisam_sort_buffer_size = 16M
    # This replaces the startup script and checks MyISAM tables if needed
    # the first time they are touched
    myisam-recover = BACKUP
    max_connections = 100
    table_cache = 32
    tmp_table_size = 128M
    #thread_concurrency = 10
    #
    # * Query Cache Configuration
    #
    #query_cache_limit = 1M
    query_cache_type = 1
    query_cache_size = 64M
    #
    # * Logging and Replication
    #
    # Both location gets rotated by the cronjob.
    # Be aware that this log type is a performance killer.
    # As of 5.1 you can enable the log at runtime!
    #general_log_file = /var/log/mysql/mysql.log
    #general_log = 1
    log_error = /var/log/mysql/error.log
    # Here you can see queries with especially long duration
    #log_slow_queries = /var/log/mysql/mysql-slow.log
    #long_query_time = 2
    #log-queries-not-using-indexes
    #
    # The following can be used as easy to replay backup logs or for replication.
    # note: if you are setting up a replication slave, see README.Debian about
    # other settings you may need to change.
    #server-id = 1
    #log_bin = /var/log/mysql/mysql-bin.log
    expire_logs_days = 10
    max_binlog_size = 100M
    #binlog_do_db = include_database_name
    #binlog_ignore_db = include_database_name
    #
    # * InnoDB
    #
    # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
    # Read the manual for more InnoDB related options. There are many!
    #
    # * Security Features
    #
    # Read the manual, too, if you want chroot!
    # chroot = /var/lib/mysql/
    #
    # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
    #
    # ssl-ca=/etc/mysql/cacert.pem
    # ssl-cert=/etc/mysql/server-cert.pem
    # ssl-key=/etc/mysql/server-key.pem

    [mysqldump]
    quick
    quote-names
    max_allowed_packet = 16M

    [mysql]
    #no-auto-rehash # faster start of mysql but no tab completition

    [isamchk]
    key_buffer = 16M

    #
    # * IMPORTANT: Additional settings that can override those from this file!
    # The files must end with '.cnf', otherwise they'll be ignored.
    #
    !includedir /etc/mysql/conf.d/

    The site contains one WordPress site, so lots of MyISAM but mostly static content, as it's not changing all that often (a WordPress cache plugin deals with this), and the Magento site, which consists of a lot of InnoDB tables, some MyISAM and some MEMORY. The "read" side seems to be running pretty well with a mass of optimizations I've used on Magento, the NGINX setup and PHP-FPM + XCache. I'd love a kick in the right direction with the MySQL config so I'm not blindly altering it based on the MySQLTuner output without understanding what I'm changing. Thanks
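
    As a rough sketch of the direction the tuner output points in (capping worst-case memory on a 250MB box rather than raising buffers), the values below are illustrative assumptions only, not tuned answers from the thread, and would need testing against the actual workload:

    [mysqld]
    # Worst case is roughly global buffers + (per-thread buffers x max_connections),
    # so the biggest lever on a small VPS is fewer connections and smaller per-thread buffers.
    max_connections         = 30
    sort_buffer_size        = 1M
    read_buffer_size        = 1M
    tmp_table_size          = 32M
    max_heap_table_size     = 32M
    # The tuner reports a 0% table cache hit rate, so raise table_cache gradually.
    table_cache             = 128
    # ~29M of InnoDB data vs an 8M buffer pool; give InnoDB enough to hold the working set.
    innodb_buffer_pool_size = 32M
    # A smaller query cache still serves WordPress/Magento reads while freeing global memory.
    query_cache_size        = 16M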

    Read the article

  • What's So Smart About Oracle Exadata Smart Flash Cache?

    - by kimberly.billings
    Want to know what's so "smart" about Oracle Exadata Smart Flash Cache? This three minute video explains how Oracle Exadata Smart Flash Cache helps solve the random I/O bottleneck challenge and delivers extreme performance for consolidated database applications. Exadata Smart Flash Cache is a feature of the Sun Oracle Database Machine. With it, you get ten times faster I/O response time and use ten times fewer disks for business applications from Oracle and third-party providers. Read the whitepaper for more information.

    Read the article

  • 3rd party data - Store in Data Warehouse or Primary database?

    - by brydgesk
    This is mostly a data warehouse philosophy question. My project involves an Oracle forms application, and a Teradata Data Warehouse for reporting and ad-hoc purposes. In addition to the primary data created by the users of our application, we also require data from various other sources. Currently, this 3rd party data comes via FTPd flat files directly to our Data Warehouse. To access the data, our users must use a series of custom BusinessObjects reports. My question is, would it make more sense for this data to be sent to our source Oracle system instead? Is it ever appropriate for a Data Warehouse to be the point of origin for users to access raw data? In short, is it more important that the operational database contain only the data created by your project, or that the data warehouse remain dedicated solely to reporting and analysis?

    Read the article

  • Increase application performance

    - by Prayos
    I'm writing a program for a company that will generate a daily report for them. All of the data that they use for this report is stored in a local SQLite database. For this report, they utilize pretty much every bit of the information in the database. So currently, when I query the database, I retrieve everything and store the information in lists. Here's what I've got:

    using (var dataReader = _connection.Select(query))
    {
        if (dataReader.HasRows)
        {
            while (dataReader.Read())
            {
                _date.Add(Convert.ToDateTime(dataReader["date"]));
                _measured.Add(Convert.ToDouble(dataReader["measured_dist"]));
                _bit.Add(Convert.ToDouble(dataReader["bit_loc"]));
                _psi.Add(Convert.ToDouble(dataReader["pump_press"]));
                _time.Add(Convert.ToDateTime(dataReader["timestamp"]));
                _fob.Add(Convert.ToDouble(dataReader["force_on_bit"]));
                _torque.Add(Convert.ToDouble(dataReader["torque"]));
                _rpm.Add(Convert.ToDouble(dataReader["rpm"]));
                _pumpOneSpm.Add(Convert.ToDouble(dataReader["pump_1_strokes_pm"]));
                _pumpTwoSpm.Add(Convert.ToDouble(dataReader["pump_2_strokes_pm"]));
                _pullForce.Add(Convert.ToDouble(dataReader["pull_force"]));
                _gpm.Add(Convert.ToDouble(dataReader["flow"]));
            }
        }
    }

    I then utilize these lists for the calculations. Obviously, the more information there is in this database, the longer the initial query will take. I'm curious whether there is a way to increase the performance of the query at all. Thanks for any and all help.

    EDIT: One of the report rows is called Daily Drilling Hours. For this calculation, I use this method:

    // Retrieves the timestamps where measured depth == bit depth and PSI >= 50
    public double CalculateDailyProjectDrillingHours(DateTime date)
    {
        var dailyTimeStamps = _time.Where((t, i) => _date[i].Equals(date) && _measured[i].Equals(_bit[i]) && _psi[i] >= 50).ToList();
        return _dailyDrillingHours = Convert.ToDouble(Math.Round(TimeCalculations(dailyTimeStamps).TotalHours, 2, MidpointRounding.AwayFromZero));
    }

    // Checks that the interval is no more than 10 seconds, then adds the interval to the total time
    private static TimeSpan TimeCalculations(IList<DateTime> timeStamps)
    {
        var interval = new TimeSpan(0, 0, 10);
        var totalTime = new TimeSpan();
        TimeSpan timeDifference;
        for (var j = 0; j < timeStamps.Count - 1; j++)
        {
            if (timeStamps[j + 1].Subtract(timeStamps[j]) <= interval)
            {
                timeDifference = timeStamps[j + 1].Subtract(timeStamps[j]);
                totalTime = totalTime.Add(timeDifference);
            }
        }
        return totalTime;
    }
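
    One hedged sketch of where to look (not part of the original question): since SQLite can do the filtering, pushing the per-day conditions into the query and reading only the columns a calculation needs avoids materializing every row into a dozen lists. Assuming the System.Data.SQLite provider and a hypothetical table name drillData with the column names used by the reader above:

    // Sketch only: "drillData" is an assumed table name; adjust to the real schema.
    using System;
    using System.Collections.Generic;
    using System.Data.SQLite;

    public static class ReportQueries
    {
        // Let SQLite filter the rows for one day instead of loading every column of every row.
        public static List<DateTime> LoadDailyDrillingTimestamps(SQLiteConnection connection, DateTime date)
        {
            const string sql =
                @"SELECT timestamp
                  FROM drillData
                  WHERE date(date) = date(@date)
                    AND measured_dist = bit_loc
                    AND pump_press >= 50
                  ORDER BY timestamp";

            var stamps = new List<DateTime>();
            using (var cmd = new SQLiteCommand(sql, connection))
            {
                cmd.Parameters.AddWithValue("@date", date.Date);
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        stamps.Add(Convert.ToDateTime(reader["timestamp"]));
                    }
                }
            }
            return stamps; // can be fed directly into TimeCalculations()
        }
    }

    Whether this helps depends on how the dates are stored, but the general idea is to read only what each calculation needs and let an index on the date column do the trimming.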

    Read the article

  • Programmers and Database Professionals in Performance Based Companies

    - by swisscheese
    Does anybody here work for a company (or know of someone that does) in the field of programming or anything related to databases and not have set work hours? Where you are paid for performance rather than for how many hours you sit in a chair at the office? Any project or company I have been a part of always has pretty strict primary hours, with the "great opportunity" / expectation to stay until the job is done. Is this type of flexibility really feasible in a group environment in these fields? Would pay-for-performance work within a company in these fields? With strict primary hours I notice a lot of inefficiencies. Some weeks or days there is only so much that can be done (for whatever the reason may be), and if your work is done it doesn't help morale to force someone to stay 8 hrs/day or 40 hrs/week if the next week they may have to pull a 60+ hour work week. I know that a lot of flexibility can come from working independently or as a consultant, so this question really does not encompass those types of positions.

    Read the article

  • OleDbExeption Was unhandled in VB.Net

    - by ritch
    Syntax error (missing operator) in query expression '((ProductID = ?) AND ((? = 1 AND Product Name IS NULL) OR (Product Name = ?)) AND ((? = 1 AND Price IS NULL) OR (Price = ?)) AND ((? = 1 AND Quantity IS NULL) OR (Quantity = ?)))'.

    I need some help sorting this error out in Visual Basic .NET 2008. I am trying to update records in an MS Access database. I can update one table, but the other table is just not having it.

    Private Sub Admin_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
        'Reads Users into the program from the text file (Located at Module.VB)
        ReadUsers()

        'Connect To Access 2007 Database File
        con.ConnectionString = ("Provider=Microsoft.ACE.OLEDB.12.0;" & "Data Source=E:\Computing\Projects\Login\Login\bds.accdb;")
        con.Open()

        'SQL connect 1
        sql = "Select * From Clients"
        da = New OleDb.OleDbDataAdapter(sql, con)
        da.Fill(ds, "Clients")
        MaxRows = ds.Tables("Clients").Rows.Count
        intCounter = -1

        'SQL connect 2
        sql2 = "Select * From Products"
        da2 = New OleDb.OleDbDataAdapter(sql2, con)
        da2.Fill(ds, "Products")
        MaxRows2 = ds.Tables("Products").Rows.Count
        intCounter2 = -1

        'Show Clients From Database in a ComboBox
        ComboBoxClients.DisplayMember = "ClientName"
        ComboBoxClients.ValueMember = "ClientID"
        ComboBoxClients.DataSource = ds.Tables("Clients")
    End Sub

    The error appears on da2.Update(ds, "Products") in this button handler:

    Private Sub Button4_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button4.Click
        Dim cb2 As New OleDb.OleDbCommandBuilder(da2)
        ds.Tables("Products").Rows(intCounter2).Item("Price") = ProductPriceBox.Text
        da2.Update(ds, "Products")

        'Alerts the user that the Database has been updated
        MsgBox("Database Updated")
    End Sub

    However, the code works when updating another table:

    Private Sub UpdateButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles UpdateButton.Click
        'Allows users to update records in the Database
        Dim cb As New OleDb.OleDbCommandBuilder(da)

        'Changes the database contents with the content in the text fields
        ds.Tables("Clients").Rows(intCounter).Item("ClientName") = ClientNameBox.Text
        ds.Tables("Clients").Rows(intCounter).Item("ClientID") = ClientIDBox.Text
        ds.Tables("Clients").Rows(intCounter).Item("ClientAddress") = ClientAddressBox.Text
        ds.Tables("Clients").Rows(intCounter).Item("ClientTelephoneNumber") = ClientNumberBox.Text

        'Updates the table within the Database
        da.Update(ds, "Clients")

        'Alerts the user that the Database has been updated
        MsgBox("Database Updated")
    End Sub
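
    For what it's worth (this is not a reply from the original thread), the error message itself points at the likely cause: the Products table has column names containing spaces ("Product Name"), and the UPDATE that OleDbCommandBuilder generates does not quote them, whereas the Clients columns have no spaces. One hedged fix is to tell the command builder to bracket identifiers before the update runs:

    'Sketch only: assumes the same da2/ds objects as above.
    'Square-bracket quoting makes the generated SQL read [Product Name] instead of Product Name.
    Dim cb2 As New OleDb.OleDbCommandBuilder(da2)
    cb2.QuotePrefix = "["
    cb2.QuoteSuffix = "]"

    ds.Tables("Products").Rows(intCounter2).Item("Price") = ProductPriceBox.Text
    da2.Update(ds, "Products")

    Renaming the columns to avoid spaces (for example ProductName) would remove the need for quoting altogether.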

    Read the article

  • SQL Azure maximum database size rises from 10GB to 50GB in June

    - by Eric Nelson
    At MIX we announced that we will be offering a new 50GB size option in June. If you would like to become an early adopter of this new size option before it is generally available, send an email to [email protected] and it will auto-reply with instructions to fill out a survey to nominate your application that requires greater than 10GB of storage. Other announcements included:

    - MARS in April: execute multiple batches in a single connection
    - Spatial data in June: geography and geometry types
    - SQL Azure Labs: a place where you can access incubations and early preview bits for products and enhancements to SQL Azure; currently the OData Service for SQL Azure

    Related links: SQL Azure Announcements at MIX | http://ukazure.ning.com

    Read the article

  • Keeping an enum and a table in sync

    - by MPelletier
    I'm making a program that will post data to a database, and I've run into a pattern that I'm sure is familiar: a short table of most-likely (very strongly likely) fixed values that serve as an enum. So suppose the following table called Status:

    Id   Description
    ---  -----------
    0    Unprocessed
    1    Pending
    2    Processed
    3    Error

    In my program I need to determine a status Id for another table, or possibly update a record with a new status Id. I could hardcode the status Ids in an enum and hope no one ever changes the database. Or I could pre-fetch the values based on the description (thus hardcoding that instead). What would be the correct approach to keep these two, enum and table, synced?
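
    One answer commonly given for this pattern (offered here only as a sketch, not from the thread) is to keep the hardcoded enum for convenience and verify it against the table at startup, failing fast if they drift. Assuming C# and the Status table above:

    // Sketch: validate at startup that the Status table still matches the hardcoded enum.
    // The enum values mirror the Id column; the check fails fast if someone edits the table.
    using System;
    using System.Collections.Generic;

    public enum Status
    {
        Unprocessed = 0,
        Pending = 1,
        Processed = 2,
        Error = 3
    }

    public static class StatusTableCheck
    {
        // rows: (Id, Description) pairs read from the Status table with any data access layer.
        public static void EnsureInSync(IEnumerable<(int Id, string Description)> rows)
        {
            foreach (var row in rows)
            {
                // Every row must map to an enum member whose name matches the description.
                if (!Enum.IsDefined(typeof(Status), row.Id) ||
                    ((Status)row.Id).ToString() != row.Description)
                {
                    throw new InvalidOperationException(
                        $"Status table row ({row.Id}, '{row.Description}') does not match the Status enum.");
                }
            }
        }
    }

    Hardcoding the ids this way keeps queries readable, while the startup check turns a silent mismatch into an immediate, obvious failure.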

    Read the article

  • Proactive Database Index Creation

    Indexes help your application find your data quickly and provide users with a well-performing application, while minimizing server resources. This article discusses indexing guidelines related to join tables and covering indexes.
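
    As a generic illustration of the second guideline (not taken from the article itself), a covering index carries every column a query touches so the lookup never has to visit the base table; the table and column names below are hypothetical:

    -- Hypothetical example: OrderLines is joined on OrderID and the query also reads Quantity and Price.
    -- The key column supports the join; INCLUDE carries the selected columns so the index "covers" the query.
    CREATE NONCLUSTERED INDEX IX_OrderLines_OrderID_Covering
        ON dbo.OrderLines (OrderID)
        INCLUDE (Quantity, Price);

    -- A query like this can then be answered entirely from the index:
    -- SELECT ol.Quantity, ol.Price
    -- FROM dbo.Orders AS o
    -- JOIN dbo.OrderLines AS ol ON ol.OrderID = o.OrderID
    -- WHERE o.OrderID = 42;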

    Read the article

  • How do I close a database connection in a WCF service?

    - by Dan
    I have been unable to find any documentation on properly closing database connections in WCF service operations. I have a service that returns a streamed response through the following method:

    public virtual Message GetData()
    {
        string sqlString = BuildSqlString();
        SqlConnection conn = Utils.GetConnection();
        SqlCommand cmd = new SqlCommand(sqlString, conn);
        XmlReader xr = cmd.ExecuteXmlReader();
        Message msg = Message.CreateMessage(
            OperationContext.Current.IncomingMessageVersion,
            GetResponseAction(),
            xr);
        return msg;
    }

    I cannot close the connection within the method, or the streaming of the response message will be terminated. Since control returns to the WCF system after the completion of that method, I don't know how I can close that connection afterwards. Any suggestions or pointers to additional documentation would be appreciated. Dan
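
    One approach worth considering (a hedged sketch, not an answer from the thread) is to hook OperationContext.Current.OperationCompleted, an event WCF raises after the reply message has been sent, and dispose the connection there:

    // Sketch: close the reader and connection only after WCF has finished streaming the reply.
    public virtual Message GetData()
    {
        string sqlString = BuildSqlString();
        SqlConnection conn = Utils.GetConnection();
        SqlCommand cmd = new SqlCommand(sqlString, conn);
        XmlReader xr = cmd.ExecuteXmlReader();

        // OperationCompleted fires once the response message has been sent to the client.
        OperationContext.Current.OperationCompleted += (sender, args) =>
        {
            xr.Close();
            conn.Close();
        };

        return Message.CreateMessage(
            OperationContext.Current.IncomingMessageVersion,
            GetResponseAction(),
            xr);
    }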

    Read the article

  • Problem connecting to postgres with Kohana 3 database module on OS X Snow Leopard

    - by Bart Gottschalk
    Environment: Mac OS X 10.6 Snow Leopard, PHP 5.3, Kohana 3.0.4

    When I try to configure and use a connection to a PostgreSQL database on localhost I get the following error:

    ErrorException [ Warning ]: mysql_connect(): [2002] No such file or directory (trying to connect via unix:///var/mysql/mysql.sock)

    Here is the configuration of the database in /modules/database/config/database.php (note the third instance named 'pgsqltest'):

    return array
    (
        'default' => array
        (
            'type'       => 'mysql',
            'connection' => array(
                /**
                 * The following options are available for MySQL:
                 *
                 * string   hostname
                 * string   username
                 * string   password
                 * boolean  persistent
                 * string   database
                 *
                 * Ports and sockets may be appended to the hostname.
                 */
                'hostname'   => 'localhost',
                'username'   => FALSE,
                'password'   => FALSE,
                'persistent' => FALSE,
                'database'   => 'kohana',
            ),
            'table_prefix' => '',
            'charset'      => 'utf8',
            'caching'      => FALSE,
            'profiling'    => TRUE,
        ),
        'alternate' => array(
            'type'       => 'pdo',
            'connection' => array(
                /**
                 * The following options are available for PDO:
                 *
                 * string   dsn
                 * string   username
                 * string   password
                 * boolean  persistent
                 * string   identifier
                 */
                'dsn'        => 'mysql:host=localhost;dbname=kohana',
                'username'   => 'root',
                'password'   => 'r00tdb',
                'persistent' => FALSE,
            ),
            'table_prefix' => '',
            'charset'      => 'utf8',
            'caching'      => FALSE,
            'profiling'    => TRUE,
        ),
        'pgsqltest' => array(
            'type'       => 'pdo',
            'connection' => array(
                /**
                 * The following options are available for PDO:
                 *
                 * string   dsn
                 * string   username
                 * string   password
                 * boolean  persistent
                 * string   identifier
                 */
                'dsn'        => 'mysql:host=localhost;dbname=pgsqltest',
                'username'   => 'postgres',
                'password'   => 'dev1234',
                'persistent' => FALSE,
            ),
            'table_prefix' => '',
            'charset'      => 'utf8',
            'caching'      => FALSE,
            'profiling'    => TRUE,
        ),
    );

    And here is the code to create the database instance, create a query and execute the query:

    $pgsqltest_db = Database::instance('pgsqltest');
    $query = DB::query(Database::SELECT, 'SELECT * FROM test')->execute();

    I'm continuing to research a solution for this error, but thought I'd ask to see if someone else has already found a solution. Any ideas are welcome. One other note is that I know my build of PHP can access this PostgreSQL db, since I'm able to manage the db using phpPgAdmin. But I have yet to determine what phpPgAdmin is doing differently to connect to the db than what Kohana 3 is attempting. Bart
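
    Not from the thread, but one thing that stands out in the configuration above: the 'pgsqltest' instance is declared with a PDO DSN whose driver prefix is mysql:, so the connection will still be attempted against MySQL. A hedged sketch of a PostgreSQL-flavoured group, assuming the pdo_pgsql extension is installed, would look like:

    // Sketch only: same Kohana 3 config file as above, with a pgsql PDO DSN instead of mysql.
    'pgsqltest' => array(
        'type'       => 'pdo',
        'connection' => array(
            'dsn'        => 'pgsql:host=localhost;dbname=pgsqltest', // pgsql: driver prefix, not mysql:
            'username'   => 'postgres',
            'password'   => 'dev1234',
            'persistent' => FALSE,
        ),
        'table_prefix' => '',
        'charset'      => 'utf8',
        'caching'      => FALSE,
        'profiling'    => TRUE,
    ),

    The mysql_connect() in the error message also suggests the default 'mysql'-type group is being hit somewhere, so it may be worth checking which instance the failing request actually resolves to.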

    Read the article

  • SQL SERVER Spatial Database Queries What About BLOB T-SQL Tuesday #006

    Michael Coles is one of the most interesting book authors I have ever met. He has a flair for writing complex stuff in simple language. There are very few people like that. I really enjoyed reading his recent book, Expert SQL Server 2008 Encryption. I strongly suggest taking a look at it. This [...]

    Read the article

  • Scaling Out the Distribution Database

    Replication is a great technology for moving data from one server to another, and it has a great many configuration options. David Poole brings us a technique for scaling out with multiple distribution databases.

    Read the article
