Search Results

Search found 38535 results on 1542 pages for 'sql memory'.


  • How much free memory should I have on my webserver?

    - by neanderslob
    I have a webserver that's currently hosting two Wordpress sites and some Java-based collaboration software. The server has 2 GB of memory and is currently using about 1.8 GB of it. Right now what's on here is pretty much a pilot project getting negligible traffic, so I think it's pretty clear that I'll be needing more memory. I was wondering how I might anticipate my memory needs based on the traffic the server gets once I release it. I've poked around on Google and what I've found has been a bit tenuous. Is there a good heuristic one should use when calculating memory demands as a function of the base (no-traffic) load on the server? For reference, the output of free -m can be seen below:

        total  used  free  shared  buffers  cached
        Mem:             2048  1832   215       0        0       0
        -/+ buffers/cache:     1832   215
        Swap:               0     0     0

    To me this looks like actual memory used and isn't an illusion due to caching or anything else. I figure the demands of my collaboration software will have to be tested experimentally, so here's free -m without that software running:

        total  used  free  shared  buffers  cached
        Mem:             2048  1109   938       0        0       0
        -/+ buffers/cache:     1109   938
        Swap:               0     0     0

    My plan B to figure this out is to add a bunch of swap space to the server, give it some traffic and adjust according to the amount of swap that gets used. I was just wondering if anyone had a good rule of thumb to estimate how much memory I should plan on in advance... or whether what I'm thinking is nuts. Many thanks in advance (I'm really quite new to this).
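
    One rough way to turn the base-load figure into a traffic-dependent estimate is to measure the resident memory of a single web-server worker and multiply by the number of concurrent workers expected at peak. This is only a sketch, and it assumes an Apache-style prefork setup; the process name and worker count are placeholders:

        # Average resident set size (KB) of one Apache worker (process name is an assumption)
        ps -C apache2 -o rss= | awk '{sum += $1; n++} END {print sum/n " KB per worker"}'

        # Rough peak estimate: base load + (expected concurrent workers * per-worker RSS)
        # e.g. 1109 MB base + 150 workers * 25 MB per worker ≈ 4.8 GB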

    Read the article

  • WPF 3.5 RenderTargetBitmap memory hog

    - by kingRauk
    I have a .NET 3.5 WPF application that uses RenderTargetBitmap, and it eats memory like a big bear. It is a known problem in 3.5 that RenderTargetBitmap.Render has memory issues. I have found some proposed solutions, but they don't help: https://connect.microsoft.com/VisualStudio/feedback/details/489723/rendertargetbitmap-render-method-causes-a-memory-leak ("Program takes too much memory"), and more. Does anyone have any more ideas to solve it?

        static Image Method(FrameworkElement element, int width, int height)
        {
            const int dpi = 192;
            element.Width = width;
            element.Height = height;
            element.Arrange(new Rect(0, 0, width, height));
            element.UpdateLayout();
            if (element is Graph)
                (element as Graph).UpdateComponents();
            var bitmap = new RenderTargetBitmap((int)(width * dpi / 96.0), (int)(height * dpi / 96.0),
                                                dpi, dpi, PixelFormats.Pbgra32);
            bitmap.Render(element);
            var encoder = new PngBitmapEncoder();
            encoder.Frames.Add(BitmapFrame.Create(bitmap));
            using (var stream = new MemoryStream())
            {
                encoder.Save(stream);
                element.Clip = null;
                Dispose(element);
                bitmap.Freeze();
                DisposeRender(bitmap);
                bitmap.Clear();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                return System.Drawing.Image.FromStream(stream);
            }
        }

        public static void Dispose(FrameworkElement element)
        {
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();
        }

        public static void DisposeRender(RenderTargetBitmap bitmap)
        {
            if (bitmap != null)
                bitmap.Clear();
            bitmap = null;
            GC.Collect();
            GC.WaitForPendingFinalizers();
        }
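
    One mitigation that sometimes helps with this class of leak is to stop allocating a new RenderTargetBitmap on every call and reuse a single instance, clearing it between renders. The following is only a sketch, assuming the output dimensions stay fixed for the lifetime of the exporter; the CachedRenderer class and its members are made up for illustration:

        using System.IO;
        using System.Windows.Media;
        using System.Windows.Media.Imaging;

        public class CachedRenderer
        {
            private readonly RenderTargetBitmap _bitmap;   // allocated once, reused for every export

            public CachedRenderer(int pixelWidth, int pixelHeight, double dpi)
            {
                _bitmap = new RenderTargetBitmap(pixelWidth, pixelHeight, dpi, dpi, PixelFormats.Pbgra32);
            }

            public byte[] RenderToPng(Visual visual)
            {
                _bitmap.Clear();            // reset previous contents instead of allocating a new bitmap
                _bitmap.Render(visual);

                var encoder = new PngBitmapEncoder();
                encoder.Frames.Add(BitmapFrame.Create(_bitmap));
                using (var stream = new MemoryStream())
                {
                    encoder.Save(stream);
                    return stream.ToArray();
                }
            }
        }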

    Read the article

  • Instantiating class with custom allocator in shared memory

    - by recipriversexclusion
    I'm pulling my hair out over the following problem: I am following the example given in the boost.interprocess documentation to instantiate, in shared memory, a fixed-size ring buffer class that I wrote. The skeleton constructor for my class is:

        template<typename ItemType, class Allocator>
        SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer( unsigned long capacity ){
            m_capacity = capacity;
            // Create the buffer nodes.
            m_start_ptr = this->allocator->allocate();   // allocate first buffer node
            BufferNode* ptr = m_start_ptr;
            for( int i = 0 ; i < this->capacity()-1; i++ ) {
                BufferNode* p = this->allocator->allocate();   // allocate a buffer node
            }
        }

    My first question: does this sort of allocation guarantee that the buffer nodes are allocated in contiguous memory locations, i.e. when I try to access the n'th node from address m_start_ptr + n*sizeof(BufferNode) in my Read() method, would it work? If not, what's a better way to keep the nodes -- creating a linked list?

    My test harness is the following:

        // Define an STL-compatible allocator of ints that allocates from the managed_shared_memory.
        // This allocator will allow placing containers in the segment.
        typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;

        // Alias a buffer that uses the previous STL-like allocator so that it allocates
        // its values from the segment
        typedef SharedMemoryBuffer<int, ShmemAllocator> MyBuf;

        int main(int argc, char *argv[]) {
            shared_memory_object::remove("MySharedMemory");

            // Create a new segment with given name and size
            managed_shared_memory segment(create_only, "MySharedMemory", 65536);

            // Initialize shared memory STL-compatible allocator
            const ShmemAllocator alloc_inst (segment.get_segment_manager());

            // Construct a buffer named "MyBuffer" in shared memory with argument alloc_inst
            MyBuf *pBuf = segment.construct<MyBuf>("MyBuffer")(100, alloc_inst);
        }

    This gives me all kinds of compilation errors related to templates for the last statement. What am I doing wrong?
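
    For what it's worth, one likely cause of the compile error is that the constructor shown takes only a capacity, while segment.construct<MyBuf>("MyBuffer")(100, alloc_inst) passes two arguments. Below is a minimal sketch of a constructor that accepts the allocator and allocates all nodes in one block (the BufferNode layout and member names are assumptions for illustration, not the poster's actual code):

        template <typename ItemType, class Allocator>
        class SharedMemoryBuffer {
        public:
            struct BufferNode { ItemType value; };   // placeholder node layout
            // Rebind the supplied allocator so it hands out BufferNode objects.
            typedef typename Allocator::template rebind<BufferNode>::other NodeAllocator;

            SharedMemoryBuffer(unsigned long capacity, const Allocator &alloc)
                : m_alloc(alloc), m_capacity(capacity)
            {
                // One allocation for the whole array keeps the nodes contiguous,
                // so m_start_ptr + n addressing in Read() is well defined.
                m_start_ptr = m_alloc.allocate(m_capacity);
            }

            ~SharedMemoryBuffer() { m_alloc.deallocate(m_start_ptr, m_capacity); }

        private:
            NodeAllocator m_alloc;
            unsigned long m_capacity;
            typename NodeAllocator::pointer m_start_ptr;   // an offset_ptr when used in shared memory
        };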

    Read the article

  • why pointer to pointer is needed to allocate memory in function

    - by skydoor
    Hi, I get a segmentation fault with the code below, but after I changed it to use a pointer to pointer it is fine. Could anybody give me a reason?

        void memory(int * p, int size) {
            try {
                p = (int *) malloc(size*sizeof(int));
            } catch( exception& e) {
                cout << e.what() << endl;
            }
        }

    It does not work when called from main like this:

        int *p = 0;
        memory(p, 10);
        for(int i = 0 ; i < 10; i++)
            p[i] = i;

    However, it works like this:

        void memory(int ** p, int size) {   // pointer to pointer
            try {
                *p = (int *) malloc(size*sizeof(int));
            } catch( exception& e) {
                cout << e.what() << endl;
            }
        }

        int main() {
            int *p = 0;
            memory(&p, 10);   // pass the address of the pointer
            for(int i = 0 ; i < 10; i++)
                p[i] = i;
            for(int i = 0 ; i < 10; i++)
                cout << *(p+i) << " ";
            return 0;
        }
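
    The underlying reason is that the first version receives a copy of the pointer, so the malloc result is assigned to the copy and the caller's p stays null. In C++ a reference to the pointer avoids the double indirection; a small sketch:

        #include <cstdlib>
        #include <iostream>

        // The pointer itself is passed by reference, so the caller's p is updated.
        void memory(int *&p, int size) {
            p = static_cast<int*>(std::malloc(size * sizeof(int)));
        }

        int main() {
            int *p = 0;
            memory(p, 10);                  // no & needed: p is bound by reference
            for (int i = 0; i < 10; i++) p[i] = i;
            for (int i = 0; i < 10; i++) std::cout << p[i] << " ";
            std::free(p);
            return 0;
        }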

    Read the article

  • Shared Memory and Process Semaphores (IPC)

    - by fsdfa
    This is an extract from Advanced Linux Programming:

        "Semaphores continue to exist even after all processes using them have terminated. The last process to use a semaphore set must explicitly remove it to ensure that the operating system does not run out of semaphores. To do so, invoke semctl with the semaphore identifier, the number of semaphores in the set, IPC_RMID as the third argument, and any union semun value as the fourth argument (which is ignored). The effective user ID of the calling process must match that of the semaphore's allocator (or the caller must be root). Unlike shared memory segments, removing a semaphore set causes Linux to deallocate immediately."

    If a process allocates a shared memory segment, and many processes use it but none ever marks it for deletion (with shmctl), then when all of them terminate the shared pages continue to be available (we can see this with ipcs). If some process did call shmctl, then when the last process detaches, the system deallocates the shared memory. So far so good (I guess; if not, correct me). What I don't understand from the quote is that it first says "Semaphores continue to exist even after all processes using them have terminated" and then "Unlike shared memory segments, removing a semaphore set causes Linux to deallocate immediately."
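
    For reference, the explicit removal the book describes looks roughly like this; a sketch with error handling omitted (on Linux the caller must define union semun itself):

        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/sem.h>

        union semun {
            int              val;
            struct semid_ds *buf;
            unsigned short  *array;
        };

        /* Remove the semaphore set identified by semid.
           The second argument and the union semun value are ignored for IPC_RMID;
           the whole set is deallocated immediately. */
        int remove_semaphores(int semid)
        {
            union semun ignored;
            return semctl(semid, 1, IPC_RMID, ignored);
        }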

    Read the article

  • When is memory actually freed?

    - by zhyk
    Hello all. I'm trying to understand memory management in Objective-C. If I look at the memory usage listed by Activity Monitor, it looks like memory is not being freed (I mean the rsize column), but in "Object Allocations" everything looks fine. Here is my simple code:

        #import <Foundation/Foundation.h>

        int main (int argc, const char * argv[]) {
            NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
            NSInteger i, k = 10000;
            while (k > 0) {
                NSMutableArray *array = [[NSMutableArray alloc] init];
                for (i = 0; i < 1000*k; i++) {
                    NSString *string = [[NSString alloc] initWithString:@"string...."];
                    [array addObject:string];
                    [string release];
                    string = nil;
                }
                [array release];
                array = nil;
                k -= 500;
            }
            [NSThread sleepForTimeInterval:5];
            [pool release];
            return 0;
        }

    As far as retain and release go, everything is balanced. But rsize only decreases after quitting this little program. Is it possible to "clean up" memory somehow before quitting?

    Read the article

  • C when to allocate and free memory - before function call, after function call...etc

    - by Keith P
    I am working on my first straight-C project, and it has been a while since I worked with C++ for that matter, so the whole memory-management picture is a bit fuzzy. I have a function that validates some input; in the simple sample below, it just ignores spaces:

        int validate_input(const char *input_line, char* out_value){
            int ret_val = 0;  /* false */
            int length = strlen(input_line);
            cout << "length = " << length << "\n";
            out_value = (char*) malloc(sizeof(char) * length + 1);
            if (0 != length){
                int number_found = 0;
                for (int x = 0; x < length; x++){
                    if (input_line[x] != ' '){   /* ignore space */
                        /* get the character */
                        out_value[number_found] = input_line[x];
                        number_found++;   /* increment counter */
                    }
                }
                out_value[number_found + 1] = '\0';
                ret_val = 1;
            }
            return ret_val;
        }

    Instead of allocating memory inside the function for out_value, should I do it before I call the function and always expect the caller to allocate memory before passing it in? As a rule of thumb, should any memory allocated inside a function always be freed before the function returns?
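
    Here is a sketch of the caller-allocates convention the question asks about. Note that as written above, the assignment to out_value inside validate_input is lost anyway, because the pointer is passed by value (the same issue as in the pointer-to-pointer question earlier). The names below are illustrative:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Caller supplies a buffer big enough for the stripped copy plus the terminating NUL. */
        int validate_input(const char *input_line, char *out_value)
        {
            int number_found = 0;
            for (size_t x = 0; x < strlen(input_line); x++) {
                if (input_line[x] != ' ')
                    out_value[number_found++] = input_line[x];
            }
            out_value[number_found] = '\0';
            return number_found > 0;
        }

        int main(void)
        {
            const char *line = "a b c";
            char *out = malloc(strlen(line) + 1);   /* caller allocates... */
            if (out && validate_input(line, out))
                printf("%s\n", out);
            free(out);                              /* ...and the caller frees */
            return 0;
        }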

    Read the article

  • Intel MKL memory management and exceptions

    - by Andrew
    Hello everyone, I am trying out Intel MKL and it appears that it has its own C-style memory management. The documentation suggests using the MKL_malloc/MKL_free pair for vectors and matrices, and I do not know what a good way to handle that is. One of the reasons for using them is that memory alignment of at least 16 bytes is recommended, and with these routines it is specified explicitly. I used to rely on auto_ptr and boost::smart_ptr a lot so I could forget about memory clean-up. How can I write an exception-safe program with MKL memory management, or should I just use regular auto_ptrs and not bother? Thanks in advance.

    EDIT: this link may explain why I brought up the question: http://software.intel.com/sites/products/documentation/hpc/mkl/win/index.htm

    UPDATE: I used an idea from the answer below for an allocator. This is what I have now:

        template <typename T, size_t TALIGN=16, size_t TBLOCK=4>
        class aligned_allocator : public std::allocator<T>
        {
        public:
            pointer allocate(size_type n, const void *hint)
            {
                pointer p = NULL;
                size_t count = sizeof(T) * n;
                size_t count_left = count % TBLOCK;
                if( count_left != 0 )
                    count += TBLOCK - count_left;
                if ( !hint )
                    p = reinterpret_cast<pointer>(MKL_malloc(count, TALIGN));
                else
                    p = reinterpret_cast<pointer>(MKL_realloc((void*)hint, count, TALIGN));
                return p;
            }
            void deallocate(pointer p, size_type n) { MKL_free(p); }
        };

    If anybody has any suggestions, feel free to make it better.
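
    One straightforward way to get exception safety without writing a full allocator is a small RAII wrapper whose destructor calls MKL_free. This is only a sketch: MKL_malloc/MKL_free are used as documented by Intel, while the wrapper class itself is an illustration, not part of MKL:

        #include <cstddef>
        #include <new>
        #include <mkl.h>

        template <typename T>
        class mkl_buffer {
        public:
            explicit mkl_buffer(std::size_t n, int alignment = 16)
                : ptr_(static_cast<T*>(MKL_malloc(n * sizeof(T), alignment)))
            {
                if (!ptr_) throw std::bad_alloc();
            }
            ~mkl_buffer() { MKL_free(ptr_); }          // runs during stack unwinding too

            T*       get()       { return ptr_; }
            const T* get() const { return ptr_; }

        private:
            mkl_buffer(const mkl_buffer&);             // non-copyable (C++03 style)
            mkl_buffer& operator=(const mkl_buffer&);
            T* ptr_;
        };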

    Read the article

  • Preallocating memory with C++ in realtime environment

    - by Elazar Leibovich
    I have a function which takes an input buffer of n bytes and needs an auxiliary buffer of n bytes in order to process the given input buffer. (I know vector allocates memory at runtime; let's say I'm using a vector that uses static preallocated memory. Imagine this is NOT an STL vector.) The usual approach is:

        void processData(vector<T> &vec) {
            vector<T> aux(vec.size());   // dynamically allocate memory
            // process data
        }
        // usage: processData(v)

    Since I'm working in a real-time environment, I wish to preallocate all the memory I'll ever need in advance. The buffer is allocated only once at startup. I want that whenever I allocate a vector, the auxiliary buffer for my processData function is allocated automatically as well. I can do something similar with a template function:

        static void _processData(vector<T> &vec, vector<T> &aux) {
            // process data
        }

        template<size_t sz>
        void processData(vector<T> &vec) {
            static T aux_buffer[sz];
            vector<T> aux(vec.size(), aux_buffer);   // use aux_buffer for the vector
            _processData(vec, aux);
        }
        // usage: processData<V_MAX_SIZE>(v);

    However, working a lot with templates is not much fun (now let's recompile everything since I changed a comment!), and it forces me to do some bookkeeping whenever I use this function. Are there any nicer designs for this problem?
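
    One common alternative in real-time code is to size a single scratch arena once at startup, using the largest buffer you will ever process, and hand slices of it to processData; this keeps the size template parameter out of every call site. A sketch with made-up names, not a drop-in solution:

        #include <cstddef>
        #include <vector>

        // A fixed scratch arena carved out once at startup; no allocation afterwards.
        class ScratchArena {
        public:
            explicit ScratchArena(std::size_t max_bytes) : storage_(max_bytes) {}

            // Hands out a view of the preallocated block; the caller must not outlive the arena.
            void* get(std::size_t bytes) {
                return bytes <= storage_.size() ? static_cast<void*>(&storage_[0]) : 0;
            }
        private:
            std::vector<char> storage_;   // allocated exactly once, at construction
        };

        template <typename T>
        void processData(std::vector<T>& vec, ScratchArena& arena) {
            T* aux = static_cast<T*>(arena.get(vec.size() * sizeof(T)));
            if (!aux) return;             // arena too small: handle as appropriate
            // ... process vec using aux as the auxiliary buffer ...
        }

        // usage: construct ScratchArena arena(MAX_ELEMENTS * sizeof(T)) at startup,
        //        then call processData(v, arena) from the real-time path.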

    Read the article

  • How to optimize paging for a large in-memory database

    - by snakefoot
    I have an application where the entire database is implemented in memory, using an STL map for each table in the database. Each item in a map is a complex object with references to items in the other maps. The application works with a large amount of data, so it uses more than 500 MB of RAM. Clients are able to contact the application and get a filtered version of the entire database. This is done by running through the whole database and finding the items relevant to the client.

    When the application has been running for an hour or so, Windows 2003 SP2 starts to page out parts of the application's RAM (even though there is 16 GB of RAM on the machine). After the application has been partly paged out, a client logon takes a long time (10 minutes) because it now generates a page fault for every pointer lookup in the maps.

    I can see it is possible to tell Windows to lock memory in RAM, but this is generally only recommended for device drivers, and only for "small" amounts of memory. I guess a poor man's solution could be to loop through the entire in-memory database periodically, thus telling Windows we are still interested in keeping the data model in RAM. Another poor man's solution could be to disable the page file completely. I guess the expensive solution would be a SQL database, rewriting the entire application to use a database layer and hoping the database system has implemented means for fast access. Are there other, more elegant solutions?
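
    If locking pages turns out to be acceptable after all, the Win32 route is to raise the process working-set quota and then VirtualLock the critical regions. A sketch only; the slack sizes are placeholders, the data would have to live in a lockable region, and pinning hundreds of megabytes has its own risks:

        #include <windows.h>

        // Ask for a larger guaranteed working set, then pin the data model's memory.
        bool pin_datamodel(void* base, SIZE_T bytes)
        {
            // Minimum/maximum working-set sizes must cover what we intend to lock.
            if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                          bytes + (64u << 20),     // minimum: data + slack
                                          bytes + (256u << 20)))   // maximum
                return false;
            return VirtualLock(base, bytes) != FALSE;
        }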

    Read the article

  • How to access variables in shared memory

    - by user1723361
    I am trying to create a shared memory segment containing three integers and an array. The segment is created and a pointer is attached, but when I try to access the values of the variables (whether changing, printing, etc.) I get a segmentation fault. Here is the code I tried:

        #include <stdio.h>
        #include <stdbool.h>
        #include <stdlib.h>
        #include <errno.h>
        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/sem.h>

        #define SIZE 10

        int* shm_front;
        int* shm_end;
        int* shm_count;
        int* shm_array;
        int shm_size = 3*sizeof(int) + sizeof(shm_array[SIZE]);

        int main(int argc, char* argsv[])
        {
            int shmid;

            //create shared memory segment
            if((shmid = shmget(IPC_PRIVATE, shm_size, 0644)) == -1) {
                printf("error in shmget");
                exit(1);
            }

            //obtain the pointer to the segment
            if((shm_front = (int*)shmat(shmid, (void *)0, 0)) == (void *)-1) {
                printf("error in shmat");
                exit(1);
            }

            //move down the segment to set the other pointers
            shm_end = shm_front + 1;
            shm_count = shm_front + 2;
            shm_array = shm_front + 3;

            //tests on shm
            //*shm_end = 10;              //gives segmentation fault
            //printf("\n%d", *shm_front); //gives segmentation fault

            //clean-up: get rid of shared memory
            shmdt(shm_front);
            shmctl(shmid, IPC_RMID, NULL);

            //printf("\n\n");
            return 0;
        }

    I tried accessing the shared memory by dereferencing the pointer to the struct, but got a segmentation fault each time.
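
    One thing worth checking (a guess at the cause, not something the poster confirmed): the program never includes <sys/shm.h>, where shmget/shmat/shmdt/shmctl are declared. With an implicit declaration on a 64-bit build, shmat's return value is treated as an int, which truncates the pointer and produces exactly this kind of segfault. Also, sizeof(shm_array[SIZE]) is the size of a single int, not of the array. A corrected sketch:

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>   /* declares shmget/shmat/shmdt/shmctl */

        #define SIZE 10

        int main(void)
        {
            /* three ints plus an array of SIZE ints */
            size_t shm_size = 3 * sizeof(int) + SIZE * sizeof(int);

            int shmid = shmget(IPC_PRIVATE, shm_size, IPC_CREAT | 0644);
            if (shmid == -1) { perror("shmget"); exit(1); }

            int *shm_front = shmat(shmid, NULL, 0);
            if (shm_front == (void *)-1) { perror("shmat"); exit(1); }

            int *shm_end = shm_front + 1;
            *shm_end = 10;                      /* now a valid access */
            printf("%d\n", *shm_end);

            shmdt(shm_front);
            shmctl(shmid, IPC_RMID, NULL);
            return 0;
        }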

    Read the article

  • Identifying memory leaks in C++

    - by Dororo
    I've got the following bit of code, which I've narrowed down as the cause of a memory leak (that is, in Task Manager, the Private Working Set of memory increases with the same repeated input string). I understand the concepts of heaps and stacks for memory, as well as the general rules for avoiding memory leaks, but something somewhere is still going wrong:

        while(!quit){
            char* thebuffer = new char[210];
            //checked the function, it isn't creating the leak
            int size = FuncToObtainInputTextFromApp(thebuffer);   //stored in thebuffer
            string bufferstring = thebuffer;
            int startlog = bufferstring.find("$");
            int endlog = bufferstring.find("&");
            string str_text = "";
            str_text = bufferstring.substr(startlog, endlog - startlog + 1);
            String^ str_text_m = gcnew String(str_text.c_str());
            //some work done
            delete str_text_m;
            delete [] thebuffer;
        }

    The only thing I can think of is that it might be the creation of 'string str_text', since it never goes out of scope because it just re-loops in the while? If so, how would I resolve that? Defining it outside the while loop wouldn't solve it, since it would also remain in scope then too. Any help would be greatly appreciated.
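
    One way to narrow the search is to remove the raw new[]/delete[] pair from the loop entirely, so the only remaining allocations are the std::string and the managed String^ (which the CLR garbage collector reclaims on its own schedule; a rising Private Working Set between collections is not necessarily a leak). A hedged sketch of the loop body, assuming FuncToObtainInputTextFromApp needs at most 210 bytes and the surrounding file already has the usual includes:

        // C++/CLI sketch; FuncToObtainInputTextFromApp and quit come from the poster's code.
        while (!quit) {
            char thebuffer[210] = {0};               // stack buffer: nothing to delete
            int size = FuncToObtainInputTextFromApp(thebuffer);

            std::string bufferstring(thebuffer);
            std::string::size_type startlog = bufferstring.find('$');
            std::string::size_type endlog   = bufferstring.find('&');

            if (startlog != std::string::npos && endlog != std::string::npos) {
                std::string str_text = bufferstring.substr(startlog, endlog - startlog + 1);
                String^ str_text_m = gcnew String(str_text.c_str());
                // some work done; no delete needed, the GC owns str_text_m
            }
        }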

    Read the article

  • Migrating from SQL Trace to Extended Events

    - by extended_events
    In SQL Server codenamed "Denali" we are moving our diagnostic tracing capabilities forward by building a system on top of Extended Events. With every new system you face the specter of migration, which is always a bit of a hassle. I'm obviously motivated to see everyone move their diagnostic tracing systems over to the new Extended Events based system, so I wanted to make sure we lowered the bar for the migration process to help ease your trials. In my initial post on Denali CTP 1 I described a couple of tables that we created that will help map the existing SQL Trace Event Classes to the equivalent Extended Events events. In this post I'll describe the tables in a bit more detail, explain the relationship between the SQL Trace objects (Event Class & Column) and Extended Events objects (Events & Actions) and, at the end, provide some sample code for a managed stored procedure that will take an existing SQL Trace session (e.g. a trace that you can see in sys.traces) and convert it into event session DDL.

    Can you relate?

    In some ways, SQL Trace and Extended Events are kind of like the Standard and Metric measuring systems in the United States. If you spend too much time trying to figure out how to convert between the two it will probably make your head hurt. It's often better to just use the new system without trying to translate between the two. That said, people like to relate new things to the things they're comfortable with, so, with some trepidation, I will now explain how these two systems are related to each other. First, some terms…

    SQL Trace is made up of Event Classes and Columns. The Event Class occurs as the result of some activity in the database engine; for example, SQL:BatchCompleted fires when a batch has completed executing on the server. Each Event Class can have any number of Columns associated with it, and those Columns contain the data that is interesting about the Event Class, such as the duration or database name.

    In Extended Events we have objects named Events, EventData fields and Actions. The Event (some people call this an xEvent but I'll stick with Event) is equivalent to the Event Class in SQL Trace since it is the thing that occurs as the result of some activity taking place in the server. An EventData field (from now on I'll just refer to these as fields) is a piece of information that is highly correlated with the event and is always included as part of the schema of an Event. An Action is something that can be associated with any Event and it will cause some additional "action" to occur whenever the parent Event occurs. Actions can do a number of different things; for example, there are Actions that collect additional data and Actions that take memory dumps. When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an Artist without the years of practice.

    Let me draw you a map.

    Event Mapping

    The table dbo.trace_xe_event_map exists in the master database with the following structure:

        Column_name       Type
        trace_event_id    smallint
        package_name      nvarchar
        xe_event_name     nvarchar

    By joining this table to sys.trace_events using trace_event_id and to sys.dm_xe_objects using xe_event_name you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event.
        SELECT
            t.trace_event_id,
            t.name [event_class],
            e.package_name,
            e.xe_event_name
        FROM sys.trace_events t
        INNER JOIN dbo.trace_xe_event_map e
            ON t.trace_event_id = e.trace_event_id

    There are a couple of things you'll notice as you peruse the output of this query:

    For the most part, the names of Events are fairly close to the original Event Class; e.g. SP:CacheMiss == sp_cache_miss, and so on.

    We've mostly stuck to a one-to-one mapping between Event Classes and Events, but there are a few cases where we have combined them when it made sense. For example, Data File Auto Grow, Log File Auto Grow, Data File Auto Shrink and Log File Auto Shrink are now all covered by a single event named database_file_size_change. This just seemed like a "smarter" implementation for this type of event; you can get all the same information from this single event (grow/shrink, Data/Log, Auto/Manual growth) without having multiple different events. You can use Predicates if you want to limit the output to just one of the original Event Class measures.

    There are some Event Classes that did not make the cut and were not migrated. These fall into two categories: there were a few Event Classes that had been deprecated, or that just did not make sense, so we didn't migrate them. (You won't find an Event related to mounting a tape – sorry.) The second category is bigger; with rare exception, we did not migrate any of the Event Classes that were related to Security Auditing using SQL Trace. We introduced the SQL Audit feature in SQL Server 2008 and that will be the compliance and auditing feature going forward. Doing this is a very deliberate decision to support separation of duties for DBAs. There are separate permissions required for SQL Audit and Extended Events tracing, so you can assign these tasks to different people if you choose. (If you're wondering, the permission for Extended Events is ALTER ANY EVENT SESSION, which is covered by CONTROL SERVER.)

    Action Mapping

    The table dbo.trace_xe_action_map exists in the master database with the following structure:

        Column_name        Type
        trace_column_id    smallint
        package_name       nvarchar
        xe_action_name     nvarchar

    You can find more details by joining this to sys.trace_columns on the trace_column_id field:

        SELECT
            c.trace_column_id,
            c.name [column_name],
            a.package_name,
            a.xe_action_name
        FROM sys.trace_columns c
        INNER JOIN dbo.trace_xe_action_map a
            ON c.trace_column_id = a.trace_column_id

    If you examine this list, you'll notice that there are relatively few Actions that map to SQL Trace Columns given the number of Columns that exist. This is not because we forgot to migrate all the Columns, but because much of the data for individual Event Classes is included as part of the EventData fields of the equivalent Events, so there is no need to specify them as Actions.

    Putting it all together

    If you've spent a bunch of time figuring out the inner workings of SQL Trace, and who hasn't, then you probably know that the typical set of Columns you find associated with any given Event Class in SQL Profiler is not fixed, but is determined by the contents of the table sys.trace_event_bindings. We've used this table along with the mapping tables to produce a list of Event + Action combinations that duplicate the SQL Profiler Event Class definitions, using the following query, which you can also find in the Books Online topic How To: View the Extended Events Equivalents to SQL Trace Event Classes.
        USE MASTER;
        GO
        SELECT DISTINCT
           tb.trace_event_id,
           te.name AS 'Event Class',
           em.package_name AS 'Package',
           em.xe_event_name AS 'XEvent Name',
           tb.trace_column_id,
           tc.name AS 'SQL Trace Column',
           am.xe_action_name AS 'Extended Events action'
        FROM (sys.trace_events te
           LEFT OUTER JOIN dbo.trace_xe_event_map em
              ON te.trace_event_id = em.trace_event_id)
        LEFT OUTER JOIN sys.trace_event_bindings tb
           ON em.trace_event_id = tb.trace_event_id
        LEFT OUTER JOIN sys.trace_columns tc
           ON tb.trace_column_id = tc.trace_column_id
        LEFT OUTER JOIN dbo.trace_xe_action_map am
           ON tc.trace_column_id = am.trace_column_id
        ORDER BY te.name, tc.name

    As you might imagine, it's also possible to map an existing trace definition to the equivalent event session by judicious use of fn_trace_geteventinfo joined with the two mapping tables. This query extracts the list of Events and Actions equivalent to the trace with ID = 1, which is most likely the Default Trace. You can find this query, along with a set of other queries and steps required to migrate your existing traces over to Extended Events, in the Books Online topic How to: Convert an Existing SQL Trace Script to an Extended Events Session.

        USE MASTER;
        GO
        DECLARE @trace_id int
        SET @trace_id = 1
        SELECT DISTINCT el.eventid, em.package_name, em.xe_event_name AS 'event'
           , el.columnid, ec.xe_action_name AS 'action'
        FROM (sys.fn_trace_geteventinfo(@trace_id) AS el
           LEFT OUTER JOIN dbo.trace_xe_event_map AS em
              ON el.eventid = em.trace_event_id)
        LEFT OUTER JOIN dbo.trace_xe_action_map AS ec
           ON el.columnid = ec.trace_column_id
        WHERE em.xe_event_name IS NOT NULL AND ec.xe_action_name IS NOT NULL

    You'll notice in the output that the list doesn't include any of the security audit Event Classes; as I wrote earlier, those were not migrated.

    But wait…there's more!

    If this were an infomercial there'd be some obnoxious guy next to me blogging "Well Mike…that's pretty neat, but I'm sure you can do more. Can't you make it even easier to migrate from SQL Trace?" Needless to say, I'd blog back, in an overly excited way, "You bet I can, obnoxious blogger side-kick!"

    What I've got for you here is an Extended Events Team Blog only special – this tool will not be sold in any store; it's a special offer for those of you reading the blog. I've wrapped all the logic of pulling the configuration information out of an existing trace and building the Extended Events DDL statement into a handy, dandy CLR stored procedure. Once you load the assembly and register the procedure, you just supply the trace id (from sys.traces) and provide a name for the event session. Run the procedure and out pops the DDL required to create an equivalent session. Any aspects of the trace that could not be duplicated are included in comments within the DDL output. This procedure does not actually create the event session – you need to copy the DDL out of the message tab and put it into a new query window to do that. It also requires an existing trace (but it doesn't have to be running) to evaluate; there is no functionality to parse T-SQL scripts. I'm not going to spend a bunch of time explaining the code here – the code is pretty well commented and hopefully easy to follow. If not, you can always post comments or hit the feedback button to send us some mail.
    Sample code: TraceToExtendedEventDDL

    Installing the procedure

    Just in case you're not familiar with installing CLR procedures… once you've compiled the assembly you can load it using a script like this:

        -- Context to master
        USE master
        GO

        -- Create the assembly from a shared location.
        CREATE ASSEMBLY TraceToXESessionConverter
        FROM 'C:\Temp\TraceToXEventSessionConverter.dll'
        WITH PERMISSION_SET = SAFE
        GO

        -- Create a stored procedure from the assembly.
        CREATE PROCEDURE CreateEventSessionFromTrace
            @trace_id int,
            @session_name nvarchar(max)
        AS
        EXTERNAL NAME TraceToXESessionConverter.StoredProcedures.ConvertTraceToExtendedEvent
        GO

    Enjoy!

    -Mike
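
    Once the procedure is registered, a usage sketch looks like this (the session name is arbitrary; the generated DDL appears on the Messages tab and must be copied into a new query window to actually create the session):

        -- Generate the CREATE EVENT SESSION DDL equivalent to the Default Trace (trace id 1)
        EXEC CreateEventSessionFromTrace @trace_id = 1, @session_name = N'MigratedDefaultTrace';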

    Read the article

  • What would be the optimal disk config for SQL Server 2008 R2?

    - by Kev
    We have a new Dell R710 server that came with the following storage configuration:

        8 x 146GB SAS 10k 6Gbps disks
        1 x PERC H700 Integrated Controller (2 x 4 disks - 2 ports, each supporting 4 disks)

    1. What would be the optimal configuration if we were just after performance?
    2. What would be the optimal configuration if we were after performance but wanted data resilience?
    3. As per 2 above, but with a hot-standby disk?

    We plan to run Windows 2008 R2 and SQL Server 2008 R2. Maximising storage capacity isn't a prime concern.

    Read the article

  • How can I grant read-only access to my SQL Server 2008 database?

    - by Adrian Grigore
    Hi, I'm trying to grant read-only access (in other words: select queries only) to a user account on my SQL Server 2008 R2 database. Which rights do I have to grant to the user to make this work? I've tried several kinds of combinations of permissions on the server and the database itself, but in all cases the user could still run update queries or he could not run any queries (not even select) at all. The error message I always got was The server principal "foo" is not able to access the database "bar" under the current security context. Thanks for your help, Adrian
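
    For reference, the usual pattern on SQL Server 2008 R2 is to map the login to a user in the target database (the quoted error typically means that mapping is missing) and then put that user in the db_datareader fixed role. A sketch using the names from the error message as placeholders:

        USE bar;   -- the target database
        GO
        -- Map the server login to a database user; without this mapping you get the
        -- "not able to access the database under the current security context" error.
        CREATE USER foo FOR LOGIN foo;
        GO
        -- db_datareader grants SELECT on all user tables and views, and nothing else.
        EXEC sp_addrolemember N'db_datareader', N'foo';
        GO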

    Read the article

  • Is there a simple way to backup and restore all Microsoft SQL Server database objects related to a particular application?

    - by Nathan Hartley
    I would like to back up not only the databases that belong to a particular application living on a shared server, but also the things that get stored outside of the databases: the server accounts, jobs, maintenance plans and whatever else I can't think of at the moment. This backup should be complete enough that its corresponding restore will recreate the entire application on a different SQL Server. This seems like a problem others must have dealt with in the past. So before I embark on creating custom PowerShell scripts for each application, I have come to ask you... can you help?

    Read the article

  • How to configure/save layout of SQL Server's Log File Viewer?

    - by gernblandston
    When I'm viewing the job history of a particular SQL Agent Job, I typically want to see whether it succeeded, its duration and maybe the duration of the individual steps of the job. When I open the history in the Log File Viewer, I always need to scroll over and shrink the 'Message' column and drag the 'Duration' column over next to the 'Step Name' column. Is there a way to configure the layout of the Log File Viewer (e.g. reposition columns, resize columns) and save it for future sessions? Thanks!
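
    As a fallback when the viewer layout can't be saved, the same step-level information can be pulled straight from msdb with a query; a sketch (the job name is a placeholder):

        -- Step-level history for one job, most recent first.
        -- run_duration is encoded as HHMMSS (e.g. 13521 = 1h 35m 21s).
        SELECT  j.name   AS job_name,
                h.step_id,
                h.step_name,
                h.run_status,            -- 1 = succeeded, 0 = failed
                h.run_date,
                h.run_time,
                h.run_duration
        FROM    msdb.dbo.sysjobhistory AS h
        JOIN    msdb.dbo.sysjobs       AS j ON j.job_id = h.job_id
        WHERE   j.name = N'My Job Name'
        ORDER BY h.instance_id DESC;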

    Read the article

  • How do I capture a 10053 trace for a SQL statement called in a PL/SQL package?

    - by Maria Colgan
    Traditionally, if you wanted to capture an Optimizer trace (event 10053) for a SQL statement you would issue an ALTER SESSION command to switch on a 10053 trace for that entire session, and then issue the SQL statement you wanted to capture the trace for. Once the statement completed you would exit the session to disable the trace, and then look in the USER_DUMP_DEST directory for the trace file. But what if the SQL statement you were interested in was actually called as part of a PL/SQL package?

    Oracle Database 11g introduced a new diagnostic events infrastructure, which greatly simplifies the task of generating a 10053 trace for a specific SQL statement in a PL/SQL package. All you need to know is the SQL_ID of the statement you are interested in. Instead of turning on the trace event for the entire session, you can now switch it on for a specific SQL_ID. Oracle will then capture a 10053 trace for the corresponding SQL statement when it is issued in that session. Remember, the SQL statement still has to be hard parsed for the 10053 trace to be generated.

    Let's begin our example by creating a PL/SQL package called 'cal_total_sales'. The SQL statement we are interested in is the same as the one in our original example, SELECT SUM(AMOUNT_SOLD) FROM SALES WHERE CUST_ID = :B1. We need to know the SQL_ID of this SQL statement to set up the trace, and we can find it in V$SQL. We now have everything we need to generate the trace. Finally, you would look in the USER_DUMP_DEST directory for the trace file with the name you specified.

    Maria Colgan
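
    For reference, the two approaches look roughly like this; the exact event syntax should be checked against your version's documentation, and the SQL_ID below is a placeholder:

        -- Traditional, session-wide 10053 trace (the statement must be hard parsed in this session)
        ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
        -- ... issue the statement or run the PL/SQL package here ...
        ALTER SESSION SET EVENTS '10053 trace name context off';

        -- 11g diagnostic events infrastructure: trace only the statement with this SQL_ID,
        -- even when it is issued from inside a PL/SQL package
        ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*][sql:fnrtqw9c233tt]';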

    Read the article

  • SQL Server 2008 CPU usage goes to 100% - troubleshooting help needed?

    - by Ali
    I have a fairly powerful database server with SQL Server 2008 R2 installed. There is only one database on it, which is being accessed from 2 servers (around 5 or 6 applications). The problem is that as soon as the applications start pointing to the database, the system's CPU usage goes up to 100%, with the SQL Server process itself using 95%+. I have checked Profiler and there aren't any heavy queries running. I have checked active connections and there are barely 150. Still, CPU usage is around 100%, applications are experiencing slow responses, and connections to the database server are getting refused. Database gurus, I really need some ideas here. Thanks.
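
    One starting point that doesn't rely on Profiler is to ask the plan cache which statements have consumed the most CPU; a sketch using the standard DMVs:

        -- Top 10 statements by total CPU since their plans were cached.
        SELECT TOP (10)
                qs.total_worker_time / 1000 AS total_cpu_ms,
                qs.execution_count,
                qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
                SUBSTRING(st.text, (qs.statement_start_offset/2) + 1,
                    ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
                      ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1) AS statement_text
        FROM    sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC;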

    Read the article

  • Configuring SQL Server Management Studio to use Windows Integrated Authentication… from non-domain-joined machines

    - by Enrique Lima
    Did you know you can pass your Windows credentials to SQL Server even when working from a workstation that is not joined to a domain? Here is how…

    From Start, click All Programs and find Microsoft SQL Server (version 2005 or 2008). Once there, right-click SQL Server Management Studio, then click Properties.

    Now the real task (we will be using the runas command): modify the shortcut's Target as follows, and remember to replace <domain\user> with the values that correspond to your environment.

    x64:
        SQL Server 2008: C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\binn\VSShell\Common7\IDE\Ssms.exe -nosplash"
        SQL Server 2005: C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files (x86)\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe -nosplash"

    x86:
        SQL Server 2008: C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files\Microsoft SQL Server\100\Tools\binn\VSShell\Common7\IDE\Ssms.exe -nosplash"
        SQL Server 2005: C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe -nosplash"

    Since we modified the shortcut, we will need to fix the icon for SSMS. Fix it by pressing the Change Icon… button and pointing to the original icon providers. It is the SSMS executables that hold the icon information, so point to:

    x64:
        SQL Server 2008: %ProgramFiles(x86)%\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe
        SQL Server 2005: C:\Program Files (x86)\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe

    x86:
        SQL Server 2008: %ProgramFiles%\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe
        SQL Server 2005: C:\Program Files\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe

    When you start SSMS from the modified shortcut, you'll be prompted for your domain password. SSMS will show a different account in the username box, but the parameters from the configuration above do work and will be passed on correctly.

    Read the article

  • SQL Server tempdb size seems large, is this normal?

    - by Abe Miessler
    From what I understand, the tempdb system database is used to hold temporary tables, intermediate results and other transient information. On one of my database instances I have a tempdb that seems very large (30 GB). This database has not been modified (going by the "last modified" date on the mdf file) in over a week. Is it normal for tempdb to remain that large for that long a period? It seems to me that it should be updated fairly often and should return the space it is using fairly quickly... Am I way off here, or is SQL Server doing something weird? FYI: this is a SharePoint 2010 database server, not sure if that makes a difference.
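
    To see whether the 30 GB is actually in use or just space the file has grown to and kept (tempdb files are not shrunk automatically), the space-usage DMV is a quick check; a sketch:

        -- How tempdb's allocated space is being used right now (values in MB).
        SELECT  SUM(user_object_reserved_page_count)     * 8 / 1024.0 AS user_objects_mb,
                SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
                SUM(version_store_reserved_page_count)   * 8 / 1024.0 AS version_store_mb,
                SUM(unallocated_extent_page_count)       * 8 / 1024.0 AS free_space_mb
        FROM    tempdb.sys.dm_db_file_space_usage;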

    Read the article

  • SQL Server 2012 and SQLMail - will it still work?

    - by Kharlos Dominguez
    We are considering upgrading our SQL Server, which is currently running 2005. We use SQLMail heavily in the organization, both to send e-mails and to import some of them into a database. I've read in various places that SQLMail was deprecated and superseded by Database Mail. I'm confused because this MS page (http://msdn.microsoft.com/en-us/library/bb402904.aspx) seems to imply that it would still work. I understand the dangers of SQLMail, but we do not have the resources to rewrite the scripts right now and would prefer to do it later on. Does SQLMail still work in 2012, and if not, how easy is it to replace with Database Mail, both for reading and sending e-mails?
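
    For the sending side, the Database Mail equivalent is msdb.dbo.sp_send_dbmail; a sketch with a placeholder profile and recipient (note that Database Mail is send-only, so the mail-reading/import part would need a different solution):

        -- Requires Database Mail to be enabled and a mail profile to exist.
        EXEC msdb.dbo.sp_send_dbmail
             @profile_name = N'DefaultProfile',
             @recipients   = N'ops@example.com',
             @subject      = N'Nightly load finished',
             @body         = N'The nightly import completed successfully.',
             @query        = N'SELECT TOP (10) name FROM sys.databases';  -- optional result set appended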

    Read the article

  • Nested Entities and calculation on leaf entity property - SQL or NoSQL approach

    - by Chandu
    I am working on a hobby project called Menu/Recipe Management. This is what my entities and their relations look like:

        A Nutrient has properties Code and Value
        An Ingredient has a collection of Nutrients
        A Recipe has a collection of Ingredients and occasionally can have a collection of other Recipes
        A Meal has a collection of Recipes and Ingredients
        A Menu has a collection of Meals

    In one of the pages, for a selected menu I need to display the effective nutrient information calculated from its constituents (Meals, Recipes, Ingredients and the corresponding Nutrients). As of now I am using SQL Server to store the data, and I navigate the chain from my C# code, starting from each meal of the menu and then aggregating the nutrient values. I think this is not an efficient way, as this calculation is done every time the page is requested and the constituents change only occasionally. I was thinking about having a background service that maintains a table called MenuNutrients ({MenuId, NutrientId, Value}) and populates/updates this table with the effective nutrients whenever any of the components (Meal, Recipe, Ingredient) changes. I feel that a GraphDB would be a good fit for this requirement, but my exposure to NoSQL is limited. I want to know what the alternative solutions/approaches are to this requirement of displaying the nutrients of a given menu. Hope my description of the scenario is clear.
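
    For the relational route, the effective-nutrient rollup can often be expressed as a single set-based query (or an indexed view over it) instead of walking the graph in C#. A sketch with hypothetical table and column names, ignoring recipes nested inside recipes and meal-level ingredients:

        -- Effective nutrient totals per menu, aggregated straight through the hierarchy.
        SELECT      m.MenuId,
                    n.NutrientId,
                    SUM(n.Value) AS Value
        FROM        Menus               AS m
        JOIN        MenuMeals           AS mm ON mm.MenuId       = m.MenuId
        JOIN        MealRecipes         AS mr ON mr.MealId       = mm.MealId
        JOIN        RecipeIngredients   AS ri ON ri.RecipeId     = mr.RecipeId
        JOIN        IngredientNutrients AS n  ON n.IngredientId  = ri.IngredientId
        GROUP BY    m.MenuId, n.NutrientId;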

    Read the article

  • SQL Server 2012 Service Pack 2 Cumulative Update #1 is available!

    - by AaronBertrand
    The SQL Server team has released SQL Server 2012 SP2 Cumulative Update #1. This cumulative update brings Service Pack 2 up to date with the fixes from SP1 CU#10 and a few from CU#11, including the fix for the online index rebuild corruption issue I discussed recently on SQLPerformance.com. It also marks the first time in the SQL Server 2012 timeframe that both cumulative update branches are on roughly the same schedule, which makes many of us happy, I'm sure. :-) KB Article: KB #2976982. Build # is 11.0.5532...(read more)

    Read the article
