Search Results

Search found 113039 results on 4522 pages for 'database sql server'.

Page 148/4522 | < Previous Page | 144 145 146 147 148 149 150 151 152 153 154 155  | Next Page >

  • ELMAH with IIS6.0 and SQL Server 2005

    - by RajeshT
    On production I have ELMAH configured so that errors are logged into a SQL Server 2005 database created specially for ELMAH. The sites run on an IIS 6.0 web server. ELMAH is able to send emails for the errors, but nothing is getting logged into the SQL Server database. The connection is a trusted connection, and my web.config contains <section name="errorLog" requirePermission="false" type="Elmah.ErrorLogSectionHandler, Elmah"/> in the elmah section group, <errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="Elmah.Sql" applicationName="VBCPortal"/> under elmah, and <add name="Elmah.Sql" connectionString="Data Source=(local);Initial Catalog=ELMAH;Trusted_Connection=True" /> in connectionStrings. Any idea why the errors are not getting logged into the database?
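
    One thing worth checking from the SQL Server side is permissions for the trusted connection's identity. A diagnostic sketch, assuming the standard ELMAH SQL script was run (it creates the ELMAH_Error table and ELMAH_LogError procedure) and that the site runs under the usual IIS 6.0 application pool identity; skip any statements for objects that already exist:

        USE ELMAH;
        GO
        -- Confirm the ELMAH objects exist in this database.
        SELECT name, type_desc
        FROM sys.objects
        WHERE name IN ('ELMAH_Error', 'ELMAH_LogError');
        GO
        -- With Trusted_Connection=True under IIS 6.0, the app connects as the
        -- application pool identity (commonly NT AUTHORITY\NETWORK SERVICE);
        -- that account needs a login, a user in ELMAH, and EXECUTE permission.
        CREATE LOGIN [NT AUTHORITY\NETWORK SERVICE] FROM WINDOWS;
        CREATE USER [NT AUTHORITY\NETWORK SERVICE]
            FOR LOGIN [NT AUTHORITY\NETWORK SERVICE];
        GRANT EXECUTE ON OBJECT::dbo.ELMAH_LogError
            TO [NT AUTHORITY\NETWORK SERVICE];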

    Read the article

  • Unaccounted for database size

    - by Nazadus
    I currently have a database that is 20GB in size. I've run a few scripts which show each table's size (and other incredibly useful information such as index stuff), and the biggest table is 1.1 million records, which takes up 150MB of data. We have fewer than 50 tables, most of which take up less than 1MB of data. After looking at the size of each table, I don't understand why the database shouldn't be 1GB in size after a shrink. The amount of available free space that SQL Server (2005) reports is 0%. The log mode is set to simple. At this point my main concern is that I feel like I have 19GB of unaccounted-for used space. Is there something else I should look at?

    Normally I wouldn't care and would make this a passive research project, except this particular situation calls for us to do a backup and restore on a weekly basis to put a copy on a satellite (which has no internet, so it must be done manually). I'd much rather copy 1GB (or even 5GB!) than 20GB of data each week.

    sp_spaceused reports the following (database size and unallocated space, then reserved / data / index_size / unused):

        Navigator-Production    19184.56 MB    3.02 MB
        19640872 KB    19512112 KB    108184 KB    20576 KB

    I've found a few other scripts (such as the ones from two of the server database size questions here); they all report the same information found above or below. The script I am using is from SQLTeam. Here is the header info:

        * BigTables.sql
        * Bill Graziano (SQLTeam.com)
        * graz@<email removed>
        * v1.11

    The top few tables show this (table, rows, reserved space, data, index, unused, etc.):

        Activity           1143639    131 MB    89 MB    41768 KB    1648 KB    46%     1%
        EventAttendance     883261     90 MB    58 MB    32264 KB     328 KB    54%     0%
        Person              113437     31 MB    15 MB    15752 KB     912 KB    103%    3%
        HouseholdMember     113443     12 MB     6 MB     5224 KB     432 KB    82%     4%
        PostalAddress        48870      8 MB     6 MB     2200 KB     280 KB    36%     3%

    The rest of the tables are the same size or smaller. No more than 50 tables.

    Update 1: All tables use unique identifiers, usually an int incremented by 1 per row. I've also re-indexed everything. I ran the DBCC shrink command as well as updating the usage before and after, over and over. An interesting thing I found: after I restarted the server and confirmed no one was using it (no maintenance procs are running; this is a very new application, under a week old), when I went to run the shrink, every now and then it would say something about data having changed. Googling yielded too few useful answers, with the obvious not applying (it was 1am and I had disconnected everyone, so it seems impossible that was really the case). The data was migrated via C# code which basically looked at another server and brought things over. The quantity of deletes at this point in time is probably under 50k rows; even if those were the biggest rows, that wouldn't be more than 100MB, I would imagine. When I go to shrink via the GUI, it reports 0% available to shrink, indicating that it's already as small as it thinks it can go.

    Update 2: sp_spaceused 'Activity' yields this (which seems right on the money):

        Activity    1143639    134488 KB    91072 KB    41768 KB    1648 KB

    Fill factor was 90. All primary keys are ints. Here is the command I used to update usage: DBCC UPDATEUSAGE(0);

    Update 3: Per Edosoft's request:

        Image    111975    2407773    19262184

    It appears as though the Image table believes it's the 19GB portion. I don't understand what this means, though. Is it really 19GB, or is it misrepresented?

    Update 4: Talking to a co-worker, I found out that it's because of the pages, as someone else here has also stated was a possibility. The only index on the Image table is a clustered PK. Is this something I can fix, or do I just have to deal with it? The regular script shows the Image table to be 6MB in size.

    Update 5: I think I'm just going to have to deal with it after further research. The images have been resized to roughly 2-5KB each; on a normal file system they don't consume much space, but on SQL Server they seem to consume considerably more. The real answer, in the long run, will likely be separating that table into another partition or something similar.
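
    One way to account for space that per-table scripts miss is to break usage down by allocation unit, since image-column pages live in LOB_DATA allocation units rather than in-row. A sketch against the SQL Server 2005 catalog views (pages are 8 KB each):

        SELECT o.name AS table_name,
               au.type_desc,                                    -- IN_ROW_DATA, LOB_DATA, ROW_OVERFLOW_DATA
               SUM(au.total_pages) * 8 / 1024 AS reserved_mb,
               SUM(au.used_pages)  * 8 / 1024 AS used_mb
        FROM sys.allocation_units AS au
        JOIN sys.partitions AS p
            ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
            OR (au.type = 2       AND au.container_id = p.partition_id)
        JOIN sys.objects AS o
            ON p.object_id = o.object_id
        WHERE o.is_ms_shipped = 0
        GROUP BY o.name, au.type_desc
        ORDER BY reserved_mb DESC;

    Comparing these per-allocation-unit numbers against the "big tables" script output should show where the 19GB is actually reserved.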

    Read the article

  • Converting from SQL Server 2000 to 2005 for ASP.NET web App

    - by Bazza Formez
    Hi there, I'm moving my ASP.NET website to a new provider. The only problem is, the old host supports my SQL Server 2000 db and the new host only supports SQL Server 2005. How should I go about the conversion? Can I simply produce a backup (.bak) of the 2000 database at the old host, and restore that file into SQL Server 2005 at the new host? Or is there more to it? Note that I don't own a copy of SQL Server 2005 at home... and I'm trying to avoid having to do so. Thanks, Bazza
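
    For reference, the backup/restore route does work in this direction: SQL Server 2005 can restore a 2000 backup directly and upgrades the database during the restore (the reverse, 2005 back to 2000, is not possible). A sketch with hypothetical database, logical file, and path names:

        -- On the old host (SQL Server 2000):
        BACKUP DATABASE MySite TO DISK = 'C:\backups\MySite.bak';

        -- On the new host (SQL Server 2005). The logical names 'MySite_Data'
        -- and 'MySite_Log' are hypothetical; check the real ones first with:
        --   RESTORE FILELISTONLY FROM DISK = 'C:\backups\MySite.bak';
        RESTORE DATABASE MySite
        FROM DISK = 'C:\backups\MySite.bak'
        WITH MOVE 'MySite_Data' TO 'D:\data\MySite.mdf',
             MOVE 'MySite_Log'  TO 'D:\data\MySite_log.ldf';

        -- The restored database keeps compatibility level 80 (2000);
        -- optionally raise it and refresh usage counters afterwards.
        EXEC sp_dbcmptlevel 'MySite', 90;
        DBCC UPDATEUSAGE ('MySite');

    Whether you can run these yourself depends on the hosts; many shared providers expose backup and restore through a control panel rather than raw T-SQL.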

    Read the article

  • SQL Server text

    - by Nick P
    I will be taking an independent study class on SQL Server management, and will have to configure SQL Server 2008 on a Windows Server 2008 system. I was wondering if anyone could suggest a decent text for the configuration and administration of SQL Server 2008. The Murach text doesn't look like it will take me far.

    Read the article

  • Sql Server copying table information between databases

    - by Andrew
    Hi, I have a script that I am using to copy data from a table in one database to a table in another database on the same SQL Server instance. The script works great when I am connected to the instance as myself, as I have dbo access to both databases. The problem is that this won't be the case on the client's SQL Server: they have separate logins for each database (SQL authentication logins). Does anyone know if there is a way to run a script under these circumstances? The script would be doing something like:

        USE sourceDB
        INSERT targetDB.dbo.tblTest (id, test_name)
        SELECT id, test_name FROM dbo.tblTest

    Thanks
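
    One arrangement that avoids needing dbo on both sides: a cross-database INSERT...SELECT runs as a single session, so its one login must map to a user in each database. A sketch of the grants the client's DBA could set up, using a hypothetical login name:

        -- Run once by the client's DBA:
        USE sourceDB;
        CREATE USER etl_user FOR LOGIN etl_user;
        GRANT SELECT ON dbo.tblTest TO etl_user;
        GO
        USE targetDB;
        CREATE USER etl_user FOR LOGIN etl_user;
        GRANT INSERT ON dbo.tblTest TO etl_user;
        GO
        -- Then the copy script, connected as etl_user:
        USE sourceDB;
        INSERT targetDB.dbo.tblTest (id, test_name)
        SELECT id, test_name FROM dbo.tblTest;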

    Read the article

  • Oracle XE as local recovery database and Oracle Standard as main db

    - by jbendahan
    Hi, I just wanted to know what you guys think about this. I have an app written in Visual Basic .NET as my front end, and an Oracle 11g Standard database as the back-end. I have about 20 PCs running this app locally, all inserting, updating, and deleting data on this single database. I want to develop a solution for the case where the server database crashes or cannot stay online. So I'm thinking of having Oracle 10g XE on each PC, so that all the data will be stored in the local db, and running a process once in a while (e.g., every 15 minutes) to send/get the data to/from the main server. What do you think about this strategy?

    Read the article

  • Database Engine not appearing in SQL Server listing

    - by Jonn
    I don't know if I'm searching for the wrong queries in Google, but I can't seem to find an answer to this. I have SQL Server 2008 installed on my PC, and according to services.msc I've got two database engines running: SQLEXPRESS (probably the one that came along with Visual Studio) and MSSQLSERVER. When I try to connect, only SQLEXPRESS is visible in the Server Name drop-down list. I tried to explicitly state MSSQLSERVER by typing in MYPCNAME\MSSQLSERVER. That didn't work. The best solution I could find on the internet was to enable stuff in Configuration Manager. That didn't work either (although I did find that TCP/VIA and all other options were disabled for MSSQLSERVER). Anyone have any other ideas on what I should try next, or something that I overlooked?

    Read the article

  • Integrate Lucene or any other search product with SQL server 2005

    - by HBACHARYA
    Hi, I need to use full text search with SQL Server 2005, and I have explored its built-in search approach (SQL Server full text indexing), but it seems less powerful. I have also looked at the features of Lucene. Now my questions: Is it possible to integrate Lucene and SQL Server in any way?
    1. Can my T-SQL queries use a Lucene index for returning results (maybe using a CLR-based function internally)?
    2. How do I update the Lucene index while data in the tables is getting updated?
    3. What could the overall architecture be?
    4. Are there any commercial products available which provide this kind of support?
    Thanks, HB
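
    On question 1: the plumbing for exposing an external index to T-SQL does exist via SQLCLR. A rough sketch of what the T-SQL surface might look like, assuming you write a .NET table-valued function against Lucene.Net (assembly, class, and table names here are all hypothetical):

        -- Register the (hypothetical) assembly that wraps Lucene.Net.
        CREATE ASSEMBLY LuceneSearch
        FROM 'C:\libs\LuceneSearch.dll'
        WITH PERMISSION_SET = EXTERNAL_ACCESS;  -- needs file access to the index
        GO
        -- Expose its search method as a table-valued function.
        CREATE FUNCTION dbo.fn_LuceneSearch (@query nvarchar(4000))
        RETURNS TABLE (doc_id int, score real)
        AS EXTERNAL NAME LuceneSearch.[LuceneSearch.SearchUdf].Search;
        GO
        -- Join the hits back to the base table.
        SELECT d.*, s.score
        FROM dbo.fn_LuceneSearch(N'full text query') AS s
        JOIN dbo.Documents AS d ON d.id = s.doc_id;

    Keeping the Lucene index in sync (question 2) is usually handled outside the engine, for example by a service that processes rows flagged via a trigger or timestamp column.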

    Read the article

  • Why the “Toilet” Analogy for SQL might be bad

    - by Jonathan Kehayias
    Robert Davis (blog/twitter) recently blogged The Toilet Analogy … or Why I Never Recommend Increasing Worker Threads, in which he uses an analogy to explain why increasing the value of the ‘max worker threads’ sp_configure option can be bad inside of SQL Server. While I can’t make an argument against Robert’s assertion that increasing worker threads may not improve performance, I can make an argument against his suggestion that simply increasing the number of logical processors, for example from...(read more)
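
    For context, the option under discussion is inspected and changed via sp_configure; a quick sketch of checking it (its default of 0 lets SQL Server size the thread pool automatically from the CPU count, which is one reason raising it by hand rarely helps):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- config_value vs. run_value shows the configured and active settings;
        -- 0 means the engine computes the pool size itself.
        EXEC sp_configure 'max worker threads';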

    Read the article

  • Microsoft Tech-Ed North America 2010 - SQL Server Upgrade, 2000 - 2005 - 2008: Notes and Best Practices

    - by ssqa.net
    There is just a week to go until Tech-Ed North America 2010 in New Orleans, and this time I'm also speaking at the conference, on the subject SQL Server Upgrade, 2000 - 2005 - 2008: Notes and Best Practices from the Field... more from here. It is a coincidence that this is the 2nd time the same talk has been selected at Tech-Ed North America for a topic I have presented at SQLBits before....(read more)

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick off my new “401 – Internals” tag that identifies posts where I pull back the curtains a bit and let you peek into what’s going on inside the extended events engine.

    In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula:

        max memory / # of buffers = buffer size

    If it were that simple I wouldn’t be writing this post.

    I’ll take “boundary” for 64K, Alex

    For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary, and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs. (Note: this test was run on a 2 core machine using per_cpu partitioning, which results in 5 buffers. See my previous post referenced above for the math behind buffer count.)

        input_memory_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        637              5                      130867               654335
        638              5                      130867               654335
        639              5                      130867               654335
        640              5                      196403               982015
        641              5                      196403               982015
        642              5                      196403               982015

    This is just a segment of the results that shows one of the “jumps” between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes:

        196403 – 130867 = 65536 bytes
        65536 / 1024 = 64 KB

    The relationship between the input for max_memory and when the regular_buffer_size jumps from one 64K boundary to the next will change based on the number of buffers being created. The number of buffers depends on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of NONE, you always get three buffers, regardless of machine configuration, so I generated a “range table” for max_memory settings between 1 KB and 4096 KB as an example.
        start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        1                      191                  NULL                   NULL                 NULL
        192                    383                  3                      130867               392601
        384                    575                  3                      196403               589209
        576                    767                  3                      261939               785817
        768                    959                  3                      327475               982425
        960                    1151                 3                      393011               1179033
        1152                   1343                 3                      458547               1375641
        1344                   1535                 3                      524083               1572249
        1536                   1727                 3                      589619               1768857
        1728                   1919                 3                      655155               1965465
        1920                   2111                 3                      720691               2162073
        2112                   2303                 3                      786227               2358681
        2304                   2495                 3                      851763               2555289
        2496                   2687                 3                      917299               2751897
        2688                   2879                 3                      982835               2948505
        2880                   3071                 3                      1048371              3145113
        3072                   3263                 3                      1113907              3341721
        3264                   3455                 3                      1179443              3538329
        3456                   3647                 3                      1244979              3734937
        3648                   3839                 3                      1310515              3931545
        3840                   4031                 3                      1376051              4128153
        4032                   4096                 3                      1441587              4324761

    As you can see, there are 21 “steps” within this range, and max_memory values below 192 KB fall below the 64K-per-buffer limit, so they generate an error when you attempt to specify them.

    Max approximates True as memory approaches 64K

    The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory really only refers to the event session buffer memory) but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (see the 576-767 row in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get “stepped up” for every 191 KB block of initial max_memory, which isn’t likely to cause a problem for most machines.

    Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets “stepped up” when you break a boundary, the delta can get much larger because it’s multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you’ve just broken through a 64K boundary and get “stepped up” to the next buffer size, you’ll end up with a total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than the max_memory value you might think you’re getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you, then this is something worth paying attention to and considering when you configure your event sessions.

    The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you’ll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The “buffer steps” for any given hardware configuration should be static within each partition mode, so if you want to have a handy reference available when you configure your event sessions, you can use the following code to generate a range table similar to the one above that is applicable for your specific machine and chosen partition mode.
        DECLARE @buf_size_output table
            (input_memory_kb bigint, total_regular_buffers bigint,
             regular_buffer_size bigint, total_buffer_size bigint)
        DECLARE @buf_size int, @part_mode varchar(8)
        SET @buf_size = 1          -- Set to the beginning of your max_memory range (KB)
        SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate

        WHILE @buf_size <= 4096    -- Set to the end of your max_memory range (KB)
        BEGIN
            BEGIN TRY
                IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'buffer_size_test')
                    DROP EVENT SESSION buffer_size_test ON SERVER

                DECLARE @session nvarchar(max)
                SET @session = 'create event session buffer_size_test on server
                                add event sql_statement_completed
                                add target ring_buffer
                                with (max_memory = ' + CAST(@buf_size AS nvarchar(4)) + ' KB,
                                memory_partition_mode = ' + @part_mode + ')'
                EXEC sp_executesql @session

                SET @session = 'alter event session buffer_size_test on server
                                state = start'
                EXEC sp_executesql @session

                INSERT @buf_size_output
                    (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
                SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size
                FROM sys.dm_xe_sessions
                WHERE name = 'buffer_size_test'
            END TRY
            BEGIN CATCH
                -- max_memory values below the per-buffer minimum fail; record the input only.
                INSERT @buf_size_output (input_memory_kb)
                SELECT @buf_size
            END CATCH
            SET @buf_size = @buf_size + 1
        END

        DROP EVENT SESSION buffer_size_test ON SERVER

        SELECT MIN(input_memory_kb) AS start_memory_range_kb,
               MAX(input_memory_kb) AS end_memory_range_kb,
               total_regular_buffers, regular_buffer_size, total_buffer_size
        FROM @buf_size_output
        GROUP BY total_regular_buffers, regular_buffer_size, total_buffer_size

    Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Event internals. - Mike

    Read the article

  • Attention users running SQL Server 2008 & 2008 R2!

    - by AaronBertrand
    In April and May, Microsoft released cumulative updates for SQL Server 2008 and 2008 R2 (I blogged about them here and here). They are:

        CU #11 for 2008 SP3 (10.00.5840) (KB #2834048)
        CU #12 for 2008 R2 SP1 (10.50.2874) (KB #2828727)
        CU #6 for 2008 R2 SP2 (10.50.4279) (KB #2830140)

    Sometime after that (it looks like the next day), both downloads were pulled, allegedly due to an index corruption issue (if you believe the commentary on the Release Services blog post for CU #6) or due to an issue...(read more)

    Read the article

  • Microsoft SQL Server High-Availability Videos and Q&A Log

    - by KKline
    You Want Videos? We Got Videos! I always enjoy getting the chance to catch up with author, consultant, and Microsoft Clustering MVP Allan Hirt . Allan and I recently presented two sessions covering an overview of high availability in Microsoft SQL Server and, the following week, a demo of how to implement several different kinds of high availability techniques including database mirroring, transactional replication, and Windows clustering services. You can see videos of these presentations at the...(read more)

    Read the article

  • Survey: Do you write custom SQL CLR procedures/functions/etc

    - by James Luetkehoelter
    I'm quite curious: despite the great capabilities of CLR-based stored procedures for off-loading those nasty operations T-SQL isn't that great at (like iteration or complex math), I'm continuing to see a wealth of SQL 2008 databases with complex stored procedures and functions which would make great candidates. The in-house skill to create the CLR code exists as well, but there is flat-out resistance to using it. In one scenario I was told "Oh, iteration isn't a problem because we've trained...(read more)

    Read the article

  • T-SQL Tuesday #31: Paradox of the Sawtooth Log

    - by merrillaldrich
    Today’s T-SQL Tuesday, hosted by Aaron Nelson (@sqlvariant | sqlvariant.com), has the theme Logging. I was a little pressed for time today to pull this post together, so this will be short and sweet. For a long time, I wondered why and how a database in Full Recovery Mode, which you’d expect to have an ever-growing log -- as all changes are written to the log file -- could in fact have a log usage pattern that looks like this: This graph shows the Percent Log Used (bold, red) and the Log File(s)...(read more)
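
    If you want to watch the same sawtooth on one of your own databases, the numbers behind a graph like this can be sampled with DBCC SQLPERF:

        -- Returns log size and 'Log Space Used (%)' for every database;
        -- sample it on a schedule to plot the pattern over time.
        DBCC SQLPERF (LOGSPACE);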

    Read the article

  • SQL University: Parallelism Week - Introduction

    - by Adam Machanic
    Welcome to Parallelism Week at SQL University . My name is Adam Machanic, and I'm your professor. Imagine having 8 brains, or 16, or 32. Imagine being able to break up complex thoughts and distribute them across your many brains, so that you could solve problems faster. Now quit imagining that, because you're human and you're stuck with only one brain, and you only get access to the entire thing if you're lucky enough to have avoided abusing too many recreational drugs. For your database server,...(read more)

    Read the article

  • SQL Server 2008 R2 still requires a trace flag for Lock Pages in Memory

    - by AaronBertrand
    Almost two years ago, I blogged that Lock Pages in Memory was finally available to Standard Edition customers (Enterprise Edition customers had long been deemed smart enough to not abuse this feature). In addition to applying a cumulative update (2005 SP3 CU4 or 2008 SP1 CU2), in order to take advantage of LPIM, you also had to enable trace flag 845. Since the trace flag isn't documented for SQL Server 2008 R2, several of us in the community assumed that it was no longer required (since it was introduced...(read more)
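
    As a practical check, you can confirm whether the flag is active on an instance; trace flag 845 has to be set as a startup parameter (-T845), after which it shows up globally:

        -- Status of trace flag 845 across all sessions (-1 = global);
        -- the Status column is 1 when the flag is enabled.
        DBCC TRACESTATUS (845, -1);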

    Read the article

  • Exploding maps in Reporting Services 2008 R2

    - by Rob Farley
    Kaboom! Well, that was the imagery that secretly appeared in my mind when I saw “USA By State Exploded” in the list of installed maps in Report Builder 3.0 – part of the spatial offering of SQL Server Reporting Services 2008 R2. Alas, it just means that the borders are bigger. Clicking on it showed me. Unfortunately, I’m not interested in maps of the US. None of my clients are there (at least, not yet – feel free to get in touch if you want to change this ‘feature’ of my company). So instead, I’ve recently been getting hold of some data for Australian areas. I’ve just bought some postcode shapes for South Australia, and will use this in demos for conferences and for showing clients how this kind of report can really impact their reporting.

    One of the companies I was talking to about getting shape files sent me a sample, so I chose the “ESRI shapefile” option you see above and browsed to my file. It appeared in the window like this: Australians will immediately recognise this as the area around Wollongong, just south of Sydney. Well, apart from me. I didn’t. I had to put a Bing Maps layer behind it to work that out, but that’s not for this post. The thing that I discovered was that if I selected the Exploded USA option (but without clicking Next) and then chose my shape file, my area around Wollongong would be exploded too! Huh! I think this is actually a bug, but a potentially useful one!

    Some further investigation (involving creating two identical reports, one with this exploded view and one without) showed that the exploded view is done by reducing the ScaleFactor property of the PolygonLayer in the map control. The exploded version has it below 1. If you set it above one, your shapes overlap. I discovered this by accident… I guess I hadn’t looked through all the PolygonLayer options to work out what they all do. And because this post is about Reporting, it can qualify for this month’s T-SQL Tuesday, hosted by Aaron Nelson (@sqlvariant).

    Read the article

  • SQL University: Parallelism Week - Part 3, Settings and Options

    - by Adam Machanic
    Congratulations! You've made it back for the third and final installment of Parallelism Week here at SQL University. So far we've covered the fundamentals of multitasking vs. parallel processing and delved into how parallel query plans actually work. Today we'll take a look at the settings and options that influence intra-query parallelism and discuss how best to set things up in various situations.

    Instance-Level Configuration

    Your database server probably has more than one logical processor....(read more)
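
    For readers following along, the instance-level knobs in question are presumably the usual pair exposed through sp_configure, with a per-query override available as a hint. A sketch; the values are illustrative, not recommendations, and the table name is hypothetical:

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- Cap the number of schedulers any one query can use...
        EXEC sp_configure 'max degree of parallelism', 8;
        -- ...and the estimated cost below which plans stay serial.
        EXEC sp_configure 'cost threshold for parallelism', 25;
        RECONFIGURE;

        -- Per-query override:
        SELECT COUNT(*) FROM dbo.SomeBigTable OPTION (MAXDOP 4);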

    Read the article

  • SQL Down Under podcast 60 with SQL Server MVP Adam Machanic

    - by Greg Low
    I managed to get another podcast posted over the weekend. Late last week, I recorded a show with Adam Machanic. Adam's always fascinating. In this show, he talks about what he's found regarding increasing query performance using parallelism. Late in the show, he gives his thoughts on a number of topics related to the upcoming SQL Server 2014. Enjoy! The show is online now: http://www.sqldownunder.com/Podcasts

    Read the article

  • T-SQL Tuesday # 16 : This is not the aggregate you're looking for

    - by AaronBertrand
    This week, T-SQL Tuesday is being hosted by Jes Borland (blog | twitter), and the theme is "Aggregate Functions." When people think of aggregates, they tend to think of MAX(), SUM() and COUNT(), and occasionally of less common functions such as AVG() and STDEV(). I thought I would write a quick post about a different type of aggregate: string concatenation. Even going back to my classic ASP days, one of the more common questions out in the community has been, "how do I turn a column into a comma-separated...(read more)
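
    The classic shape of that answer on SQL Server 2005 and later is FOR XML PATH('') concatenation; a sketch with hypothetical table and column names:

        -- Build a comma-separated list of products per category;
        -- STUFF() strips the leading comma.
        SELECT c.CategoryName,
               STUFF((SELECT ',' + p.ProductName
                      FROM dbo.Products AS p
                      WHERE p.CategoryID = c.CategoryID
                      ORDER BY p.ProductName
                      FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'),
                     1, 1, '') AS ProductList
        FROM dbo.Categories AS c;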

    Read the article
