Search Results

Search found 31421 results on 1257 pages for 'software performance'.


  • JBoss 4.2.3 Won't Start

    - by Thody
    Hi, I'm trying to start a new installation of JBoss 4.2.3, and it gets as far as "INFO [Server] Core system initialized", then hangs for several minutes. There is a Java process running, but only at ~35% CPU. Also, looking at the boot.log, there are no entries after roughly the first second of the boot. Any ideas what might be up? Update: After about 10 minutes, I got a handful of garbage collection warnings: GC Warning: Repeated allocation of very large block (appr. size 512000): May lead to memory leak and poor performance.

    Read the article

  • What are the pros & cons of these MySQL engines for OLTP -- XtraDB, PBXT, or TokuDB?

    - by Continuation
    I'm working on a social website with an approximate read/write split of 90/10, and I'm trying to decide on a MySQL engine. The ones I'm interested in are XtraDB, PBXT and TokuDB. What are the pros and cons of each for my use case? A few specific questions: PBXT uses a log-based structure that avoids double writes. It sounds very elegant, but the benchmarks I've seen don't show much advantage over XtraDB. Do you have any experience with PBXT/XtraDB you can share? TokuDB sounds VERY interesting, but all the benchmarks I've seen are about single-threaded bulk inserts - inserting 100M rows, for example - which isn't very relevant for OLTP. What about its performance with a large number of concurrent threads reading and writing at the same time? Has anyone tried that?
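
    Absent published numbers, a concurrent 90/10 mix is easy to approximate yourself against each candidate engine. A minimal sketch, assuming mysql-connector-python is installed and a placeholder posts table already populated with roughly 100k rows; all names and credentials here are made up:

      # Rough concurrent probe for a 90/10 read/write OLTP-style mix.
      # Assumes: pip install mysql-connector-python, plus a test schema like
      #   CREATE TABLE posts (id INT PRIMARY KEY AUTO_INCREMENT, body TEXT)
      # populated with ~100k rows. All names/credentials are placeholders.
      import random
      import threading
      import time

      import mysql.connector

      def worker(counts, idx, seconds=30):
          conn = mysql.connector.connect(user="bench", password="secret",
                                         host="127.0.0.1", database="test")
          cur = conn.cursor()
          deadline = time.time() + seconds
          while time.time() < deadline:
              if random.random() < 0.9:                       # 90% reads
                  cur.execute("SELECT body FROM posts WHERE id = %s",
                              (random.randint(1, 100000),))
                  cur.fetchall()
              else:                                           # 10% writes
                  cur.execute("INSERT INTO posts (body) VALUES (%s)", ("x" * 100,))
                  conn.commit()
              counts[idx] += 1
          conn.close()

      THREADS = 32                                            # concurrent clients
      counts = [0] * THREADS
      workers = [threading.Thread(target=worker, args=(counts, i)) for i in range(THREADS)]
      for t in workers: t.start()
      for t in workers: t.join()
      print("total operations in 30s:", sum(counts))

    Running the same script against each engine, configured the way production will be, says more about a 90/10 OLTP mix than single-threaded bulk-insert numbers do.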

    Read the article

  • How to enable Integrated Load Balancing in Windows 8

    - by intodevelopment
    One of the cool new features many sites mention is "Integrated Load Balancing" in Windows 8. It is supposed to do the following: "If you have multiple active network connections, Windows 8 will intelligently balance the network traffic between them for performance" (source: Short: Some of what Microsoft didn’t show of Windows 8). I would love to see this feature in action. I am connected to a WiFi access point and to a different network by cable, yet I don't see the operating system 'intelligently' balancing the network load when, for instance, downloading multiple files. Is there any way to enable it, or should I adjust my expectations?

    Read the article

  • Everything You Ever Wanted to Know about Mod_Rewrite Rules but Were Afraid to Ask?

    - by Kyle Brandt
    How can I become an expert at writing mod_rewrite rules? What is the fundamental format and structure of mod_rewrite rules? What form/flavor of regular expressions do I need to have a solid grasp of? What are the most common mistakes/pitfalls when writing rewrite rules? What is a good method for testing and verifying mod_rewrite rules? Are there SEO or performance implications of mod_rewrite rules I should be aware of? Are there common situations where mod_rewrite might seem like the right tool for the job but isn't? What are some common examples?

    Read the article

  • Windows 8 Task Manager RAM Usage Accuracy

    - by user264892
    The new Task Manager in Windows 8 has a great UI; however, there are some discrepancies in the data that I cannot account for. Machine: 8 GB of total RAM (a physical machine, not a virtual one). The Processes tab shows 45% of memory utilized, but the listed processes do not add up to 3.5 GB of RAM - they add up to 0.948 GB - and there is no "processes for all users" option. The Performance tab shows: In use: 3.6 GB, Available: 4.4 GB, Committed: 4.1/9.2 GB, Cached: 3.7 GB, Paged pool: 376 MB, Non-paged pool: 135 MB. My reading of this is that I have a lot of "cloaked" processes running somewhere, eating my RAM. How do I interpret this data, and how do I verify it?
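
    Most of the gap is usually processes belonging to other users and sessions (which the Processes tab hides), plus kernel paged/non-paged pools and the file cache, none of which are charged to a process. One way to cross-check is to sum the per-process working sets yourself; a sketch, assuming Python with the psutil package and an elevated prompt so other users' processes are visible:

      # Compare the sum of per-process working sets against total RAM in use.
      # Assumes: pip install psutil; run from an elevated prompt so that
      # processes of other users/sessions are visible.
      import psutil

      per_process = 0
      for p in psutil.process_iter(attrs=["memory_info"]):
          mi = p.info["memory_info"]
          if mi is not None:                       # None if access was denied
              per_process += mi.rss

      vm = psutil.virtual_memory()
      gib = 1024 ** 3
      print(f"sum of per-process working sets: {per_process / gib:.2f} GiB")
      print(f"memory in use per the OS:        {(vm.total - vm.available) / gib:.2f} GiB")
      # The remainder is kernel memory (paged/non-paged pool), drivers and the
      # file cache, which are not attributed to any individual process.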

    Read the article

  • Can an SSD notify the hosting OS that its wear level is getting high?

    - by Tony_Henrich
    I read a lot about SSDs and I am interested in them for server use. My biggest concern is their reliability: a lot of writes shorten their life span. I could mitigate this problem if I could run some kind of diagnostics on the SSD on a regular basis, or if the SSD could automatically warn the OS that its reliability is reaching a critical level. Think of this as S.M.A.R.T. or software like SpinRite for SSDs. Does anything I mentioned exist now? Which kind/brand of SSD does this? I don't mind swapping out a tired SSD for a newer one once in a while. I am pretty sure that an SSD's life is measured in years and not in a few months? For me, the improved performance will pay for the SSD over and over. I am planning to use plenty of RAM as well.
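
    Most SSDs do expose wear through SMART attributes (for example Media_Wearout_Indicator on Intel drives or Wear_Leveling_Count on Samsung drives), which smartmontools can read and a script can poll. A hedged sketch; the device path and attribute names are placeholders that vary by vendor:

      # Poll SSD wear via smartmontools (smartctl must be installed; needs root
      # or an elevated prompt). Attribute names differ by vendor; the two below
      # are common examples and may not match your drive -- check the output of
      # `smartctl -A /dev/sdX` first.
      import subprocess

      DEVICE = "/dev/sda"                      # placeholder
      WEAR_ATTRS = ("Media_Wearout_Indicator", "Wear_Leveling_Count")

      out = subprocess.run(["smartctl", "-A", DEVICE],
                           capture_output=True, text=True, check=False).stdout
      for line in out.splitlines():
          if any(attr in line for attr in WEAR_ATTRS):
              fields = line.split()
              name, normalized = fields[1], int(fields[3])   # normalized VALUE column
              print(f"{name}: {normalized}")
              if normalized < 20:
                  print("WARNING: SSD wear level is getting high -- plan a replacement.")

    The normalized value usually starts near 100 and counts down toward the vendor threshold, so the trend and the threshold matter more than any absolute number.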

    Read the article

  • How does Apache process several requests at a time?

    - by Vicenç Gascó
    In short: if 10 requests hit Apache, does it process them one by one, so that when R3 finishes it starts to run R4, or does it fire up 10 processes/threads/whatever and resolve them simultaneously? Now some background: I have a PHP script that takes up to two minutes to do some processing. My question is: while a client is waiting for those 2 minutes, are all the other clients' requests being processed, or are they also waiting for this one to end? By the way, if there are simultaneous requests, how can I handle them? Let's say put a limit on them, or a limit on resources consumed. For instance, I want the server to use 80% of its capacity for serving the webapp and just 20% for those long operations, because I'm in no hurry to finish them. I don't know if it matters, but it's all in PHP. Thanks in advance!
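
    With the usual prefork or worker MPM, Apache hands each request to its own process or thread (up to MaxClients / MaxRequestWorkers), so a slow PHP script should not block other clients unless that pool, or PHP's own process pool, is exhausted. You can observe this directly by firing one slow request and several fast ones at the same time; a stdlib-only sketch with placeholder URLs:

      # Observe whether a slow request blocks fast ones: start one request to a
      # slow endpoint, then time several requests to a fast page in parallel.
      # The URLs are placeholders for your own server.
      import threading
      import time
      import urllib.request

      SLOW_URL = "http://localhost/slow.php"   # takes minutes
      FAST_URL = "http://localhost/fast.php"   # returns immediately

      def fetch(url, label):
          start = time.time()
          with urllib.request.urlopen(url) as resp:
              resp.read()
          print(f"{label}: {time.time() - start:.2f}s")

      threads = [threading.Thread(target=fetch, args=(SLOW_URL, "slow"))]
      threads += [threading.Thread(target=fetch, args=(FAST_URL, f"fast-{i}"))
                  for i in range(5)]
      for t in threads: t.start()
      for t in threads: t.join()
      # If the fast requests return in milliseconds while the slow one is still
      # running, the server is clearly handling them concurrently.

    Capping what the long jobs consume is usually handled outside Apache itself, for example by pushing them onto a separate queue/worker process instead of running them inside the web request.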

    Read the article

  • Should I never put a transactional replication distributor on a subscriber server?

    - by Stuart Branham
    What factors into choosing a distribution server for transactional replication? In our topology, we've always had the distributor reside on the publishing server. We rarely generate snapshots and performance is good enough, so this is okay for us today. One of our instances is moving to a cluster, so we need to move the distributor off for resilience/symmetry. Right now our two choices are to use a server physically close to the publishers, or our single subscription server. Our publisher is in our main office, and our subscriber is in a colocation facility off-site which our ISP runs. We have a pretty good line to it. The reason we're even considering the latter is to save work and licensing costs.

    Read the article

  • How to monitor a folder and trigger a command-line action when a file is created or edited?

    - by bigmattyh
    I need to set up some sort of a script on my Vista machine, so that whenever a file is added to a particular folder, it automatically triggers a background process that operates on the file. (The background process is just a command-line utility that takes the file name as an argument, along with some other predefined options.) I'd like to do this using native Windows features, if possible, for performance and maintenance reasons. I've looked into using Task Scheduler, but after perusing the trigger system for a while, I haven't been able to make much sense of it, and I'm not even sure if it's capable of doing what I need. I'd appreciate any suggestions. Thanks!
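
    If the Task Scheduler route stays opaque, a tiny polling script is a workable fallback (not a native Windows feature, so this is only a sketch; the folder path and the command line are placeholders):

      # Watch a folder and run a command-line tool on each new or modified file.
      # Polling keeps it dependency-free; adjust INTERVAL to taste.
      import os
      import subprocess
      import time

      WATCH_DIR = r"C:\inbox"                        # placeholder folder
      COMMAND = ["mytool.exe", "--option", "value"]  # placeholder utility + options
      INTERVAL = 5                                   # seconds between scans

      seen = {}
      while True:
          for name in os.listdir(WATCH_DIR):
              path = os.path.join(WATCH_DIR, name)
              if not os.path.isfile(path):
                  continue
              mtime = os.path.getmtime(path)
              if seen.get(path) != mtime:             # new or changed since last scan
                  seen[path] = mtime
                  subprocess.Popen(COMMAND + [path])  # fire and forget
          time.sleep(INTERVAL)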

    Read the article

  • Would SSD drives benefit from a non-default allocation unit size?

    - by davebug
    The default allocation unit size recommended when formatting a drive in our current set-up is 4096 bytes. I understand the basics of the pros and cons of larger and smaller sizes (performance boost vs. space preservation), but it seems the benefits of a solid state drive (seek times massively lower than hard disks) may create a situation where a much smaller allocation size is not detrimental. Were this the case, it would at least partially help to overcome the disadvantage of SSDs (massively higher prices per GB). Is there a way to determine the 'cost' of smaller allocation sizes specifically related to seek times? Or are there any studies or articles recommending a change from the default based on this newer tech? (Assume an average mix of file sizes: program files, OS files, data, MP3s, text files, etc.)

    Read the article

  • accessing external mysql server through "ssh tunnel" - any drawbacks?

    - by Max
    In an upcoming project I have a two-server setup: one is the application server, and the other, which already exists, runs the MySQL server with the databases I need to access. I contacted the admin of the MySQL server, and the only way I can access the remote MySQL databases is via an "SSH tunnel". I have never done this before and hadn't heard of it until now, so my question is: are there any drawbacks, e.g. performance-wise? Isn't it rather slow compared to accessing the MySQL server directly on its default port?
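
    Mechanically, the tunnel is just SSH local port forwarding: a local port is forwarded to the MySQL port on the remote machine, and the client connects to 127.0.0.1. The overhead is SSH encryption plus the normal network round trip, which for most workloads is small next to query time, and it is easy to measure. A sketch with placeholder host names and credentials, assuming an ssh client and mysql-connector-python:

      # Open an SSH tunnel (local 3307 -> remote 3306) and query MySQL through it.
      # Equivalent to:  ssh -N -L 3307:127.0.0.1:3306 user@db.example.com
      import subprocess
      import time

      import mysql.connector

      tunnel = subprocess.Popen(
          ["ssh", "-N", "-L", "3307:127.0.0.1:3306", "user@db.example.com"])
      time.sleep(3)  # crude wait for the tunnel to come up

      try:
          conn = mysql.connector.connect(host="127.0.0.1", port=3307,
                                         user="appuser", password="secret",
                                         database="appdb")
          cur = conn.cursor()
          start = time.time()
          cur.execute("SELECT 1")
          cur.fetchall()
          print(f"round trip through tunnel: {(time.time() - start) * 1000:.1f} ms")
          conn.close()
      finally:
          tunnel.terminate()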

    Read the article

  • How do I sync a subset of tables between two databases on the same mysql database server

    - by Mike
    I would like to be able to sync a subset of tables between two MySQL databases that are running on the same server. One of the databases acts as the master, where inserts, updates and deletes can be made; the second database uses those same tables for read-only operations. I do not want to use federated tables to achieve this. The long-term goal is to separate the two databases onto multiple servers; the second, read-only database may also be replicated a few times over to distribute it geographically for load and performance purposes, each copy with unique data. Once that is achieved, I plan to use the binlog to replicate those specific tables to the secondary databases. In the meantime, I'd like to keep these tables in sync. Is there a more elegant way to do this than using a cronjob and mysqldump?
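
    Until the binlog-based replication is in place, cron plus mysqldump is still the simplest thing that works, and it can be limited to exactly the subset of tables involved. A sketch of the periodic job; database, table and credential names are placeholders:

      # Dump a subset of tables from the master database and load them into the
      # read-only copy on the same server. Names and credentials are placeholders.
      import subprocess

      TABLES = ["users", "products", "orders"]
      DUMP = ["mysqldump", "--user=sync", "--password=secret",
              "--single-transaction", "master_db"] + TABLES
      LOAD = ["mysql", "--user=sync", "--password=secret", "readonly_db"]

      dump = subprocess.Popen(DUMP, stdout=subprocess.PIPE)
      load = subprocess.Popen(LOAD, stdin=dump.stdout)
      dump.stdout.close()          # let mysqldump see a broken pipe if mysql exits early
      load.wait()
      dump.wait()
      if dump.returncode or load.returncode:
          raise SystemExit("table sync failed")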

    Read the article

  • Best virtualization solution for running Linux under Mac OS X?

    - by grumbles
    I'd like to run a virtualized Ubuntu instance under Mac OS X (10.6). I've used VirtualBox in the past, but I'm looking for something faster and don't mind paying for either Parallels Desktop or VMware Fusion. Does anyone have experience running Linux guests under either or both programs? I'm primarily interested in doing software development on the Linux guest installation, but I'm also very concerned with the performance and responsiveness of the guest OS. I have a mid-2010 15" MacBook Pro (2.66 GHz i5, 8 GB of RAM, NVIDIA GeForce GT 330M). Thanks!

    Read the article

  • non-mapped virtual memory & total number of connections

    - by tszming
    We have two MongoDB data nodes (a replica set) - Primary & Secondary. I noticed that the non-mapped virtual memory is relatively high, and I'm wondering whether it is hurting our MongoDB performance (the server usually peaks at around 6-7K queries per second). In MMS, it is stated: "The most common case of usage of a high amount of memory for non-mapped is that there are very many connections to the database." So we checked the memory usage with db.serverStatus().mem on our Secondary: { "bits" : 64, "resident" : 6846, "virtual" : 416797, "supported" : true, "mapped" : 205549, "mappedWithJournal" : 411098, "note" : "virtual minus mapped is large. could indicate a memory leak" } Note: We are using 2.0.4, and the default stack size should now be 1 MB per connection. The current number of connections is around 1.1K, but the non-mapped virtual memory (virtual - mappedWithJournal) is around 5699 MB. The trend is quite stable, so I can't say there is a leak here, but where has the memory gone? Any ideas?
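
    At ~1.1K connections the stacks explain only about 1.1 GB of the 5.7 GB, so most of the rest is the server's own heap and internal buffers rather than connection overhead. The same arithmetic MMS does can be reproduced from serverStatus; a sketch, assuming a reasonably recent pymongo and a mongod reachable on localhost:

      # Reproduce the MMS "non-mapped virtual memory" number and estimate how much
      # of it the connection stacks (~1 MB each on 2.0+) can account for.
      # Assumes: pip install pymongo, and a mongod reachable on localhost.
      from pymongo import MongoClient

      status = MongoClient("localhost", 27017).admin.command("serverStatus")
      mem = status["mem"]
      non_mapped_mb = mem["virtual"] - mem.get("mappedWithJournal", mem["mapped"])
      conns = status["connections"]["current"]

      print(f"non-mapped virtual memory:  {non_mapped_mb} MB")
      print(f"current connections:        {conns} (~{conns} MB of thread stacks)")
      print(f"unexplained by connections: {non_mapped_mb - conns} MB "
            "(heap, journal buffers, etc.)")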

    Read the article

  • Which UPS for home use can automatically shut down the PC when the battery is low?

    - by Tony_Henrich
    I am planning to build a single server machine which is highly dependent on data residing in RAM for performance reasons. I am looking for a UPS which can power the PC during a blackout (very rare) and, when the battery level is at around 20%, send a signal to Windows 2008 to shut down. Even if it's only for a few minutes, that's good enough. This is for home use, so I am looking for an inexpensive unit (less than $300?). Which brand/model is a good choice? I prefer one whose battery is easily and cheaply replaced.

    Read the article

  • SQL and IIS HDDs configuration on server

    - by john_1234
    Hi, I've just added a new production server and I was wondering if you could help me decide which configuration suits it best. Current configuration: 40 GB ~ C (System), 250 GB ~ D (SQL - MDF & LDF), 250 GB ~ F (IIS), 1 TB ~ E (storage of users' files). (Note: C and D are partitions on the same physical HDD.) I've heard that splitting the LDF and MDF onto separate drives can work magic in terms of performance, so the core of my question is how you would recommend doing that. For example, putting the MDF with IIS is an option, yet I'm not so sure about it.

    Read the article

  • What do desktop users use SSD for? [closed]

    - by Continuation
    I keep hearing people say how much faster their desktops/laptops are after switching to an SSD. But when I ask them what they use the SSD for, the answer is always "booting". Not trying to start a flame war here. I use an SSD for my database server and it makes a huge difference: a single SSD can replace a 10-drive RAID 10 and is actually both much faster and MUCH CHEAPER. But I just can't think of a use case for desktops where an SSD could make a similar impact. Sure, it's kind of nice to cut boot time down from 40 seconds to 10 seconds, but is it that big a deal? I would love to hear how an SSD improves your desktop performance.

    Read the article

  • IPSec 2 hosts (preshared key) - network shares very slow

    - by LxFlip
    I'm testing an IPSec config between 2 hosts, using IPSec authentication with a preshared key - a very simple configuration. (I want to start with a simple IPSec preshared-key config and then step up to certificates or Kerberos...) The problem: the connection is working, but accessing network file shares for the first time is very slow. On the same host where I'm testing the shares, I have an IIS site running, and its performance seems normal and fast. Does anybody know why the SMB shares are so slow? Are there any IPSec policy options that should be tweaked? Thanks

    Read the article

  • Citrix on ESX 4 U1 - Slow login times

    - by thomps01
    I'm sure they'd be just as slow, if not slower, on ESX 3, but I'm looking for some assistance. On a physical Citrix server, logins take 1-4 seconds; on the virtual one, 16-23 seconds. I'm looking for performance enhancements I can make to my VMs to try to reduce the login wait times. The hardware is fine (HP BL685, 24 cores, 64 GB RAM) and there's nothing pushing it yet; the network is 10Gb. I'm planning to test the configuration with VMXNET3 tomorrow, but does anyone have a list of best practices I can use when testing?

    Read the article

  • vSphere 4 - how can I cancel a file copy in progress?

    - by DrStalker
    Setup: VMware vSphere 4, SAN storage with multiple datastores, no vCenter. I shut down a virtual machine and, using the datastore browser, did a copy/paste to copy the VM to a new datastore with additional space. The file copy performance was very poor, and due to time constraints I decided to cancel the copy task. However, the copy task showing in the vSphere client cannot be cancelled; the cancel option is disabled. Currently I am not able to start the machine in its original location, as the disk files are locked by the copy. How can I abort the copy? I tried deleting the target directory, but this did not abort the copy task.

    Read the article

  • Linux filesystem suggestion for MySQL with a 100% SELECT workload

    - by gmemon
    I have a MySQL database that contains millions of rows per table, and there are 9 tables in total. The database is fully populated, and all I am doing is reads, i.e. there are no INSERTs or UPDATEs. Data is stored in MyISAM tables. Given this scenario, which Linux file system would work best? Currently I have XFS, but I read somewhere that XFS has horrible read performance. Is that true? Should I move the database to an ext3 file system? Thanks
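
    Rather than trusting anecdotes about XFS, the workload here (random reads of smallish blocks across large MyISAM data files) is easy to measure directly on each candidate filesystem. A crude, stdlib-only sketch; the path and sizes are placeholders, and the OS cache should be dropped between runs for comparable numbers:

      # Crude random-read benchmark: time many small reads at random offsets in a
      # large file on the filesystem under test. Drop the page cache between runs
      # (e.g. echo 3 > /proc/sys/vm/drop_caches as root) for comparable numbers.
      import os
      import random
      import time

      PATH = "/mnt/xfs_test/bigfile"   # placeholder: a multi-GB file on the fs under test
      BLOCK = 16 * 1024                # 16 KB, roughly a MyISAM-ish read unit
      READS = 5000

      size = os.path.getsize(PATH)
      start = time.time()
      with open(PATH, "rb") as f:
          for _ in range(READS):
              f.seek(random.randrange(0, size - BLOCK))
              f.read(BLOCK)
      elapsed = time.time() - start
      print(f"{READS} random {BLOCK // 1024} KB reads in {elapsed:.2f}s "
            f"({READS / elapsed:.0f} reads/s)")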

    Read the article

  • Dynamic Type to do away with Reflection

    - by Rick Strahl
    The dynamic type in C# 4.0 is a welcome addition to the language. One thing I’ve been doing a lot with it is to remove explicit Reflection code that’s often necessary when you ‘dynamically’ need to walk and object hierarchy. In the past I’ve had a number of ReflectionUtils that used string based expressions to walk an object hierarchy. With the introduction of dynamic much of the ReflectionUtils code can be removed for cleaner code that runs considerably faster to boot. The old Way - Reflection Here’s a really contrived example, but assume for a second, you’d want to dynamically retrieve a Page.Request.Url.AbsoluteUrl based on a Page instance in an ASP.NET Web Page request. The strongly typed version looks like this: string path = Page.Request.Url.AbsolutePath; Now assume for a second that Page wasn’t available as a strongly typed instance and all you had was an object reference to start with and you couldn’t cast it (right I said this was contrived :-)) If you’re using raw Reflection code to retrieve this you’d end up writing 3 sets of Reflection calls using GetValue(). Here’s some internal code I use to retrieve Property values as part of ReflectionUtils: /// <summary> /// Retrieve a property value from an object dynamically. This is a simple version /// that uses Reflection calls directly. It doesn't support indexers. /// </summary> /// <param name="instance">Object to make the call on</param> /// <param name="property">Property to retrieve</param> /// <returns>Object - cast to proper type</returns> public static object GetProperty(object instance, string property) { return instance.GetType().GetProperty(property, ReflectionUtils.MemberAccess).GetValue(instance, null); } If you want more control over properties and support both fields and properties as well as array indexers a little more work is required: /// <summary> /// Parses Properties and Fields including Array and Collection references. /// Used internally for the 'Ex' Reflection methods. /// </summary> /// <param name="Parent"></param> /// <param name="Property"></param> /// <returns></returns> private static object GetPropertyInternal(object Parent, string Property) { if (Property == "this" || Property == "me") return Parent; object result = null; string pureProperty = Property; string indexes = null; bool isArrayOrCollection = false; // Deal with Array Property if (Property.IndexOf("[") > -1) { pureProperty = Property.Substring(0, Property.IndexOf("[")); indexes = Property.Substring(Property.IndexOf("[")); isArrayOrCollection = true; } // Get the member MemberInfo member = Parent.GetType().GetMember(pureProperty, ReflectionUtils.MemberAccess)[0]; if (member.MemberType == MemberTypes.Property) result = ((PropertyInfo)member).GetValue(Parent, null); else result = ((FieldInfo)member).GetValue(Parent); if (isArrayOrCollection) { indexes = indexes.Replace("[", string.Empty).Replace("]", string.Empty); if (result is Array) { int Index = -1; int.TryParse(indexes, out Index); result = CallMethod(result, "GetValue", Index); } else if (result is ICollection) { if (indexes.StartsWith("\"")) { // String Index indexes = indexes.Trim('\"'); result = CallMethod(result, "get_Item", indexes); } else { // assume numeric index int index = -1; int.TryParse(indexes, out index); result = CallMethod(result, "get_Item", index); } } } return result; } /// <summary> /// Returns a property or field value using a base object and sub members including . syntax. 
/// For example, you can access: oCustomer.oData.Company with (this,"oCustomer.oData.Company") /// This method also supports indexers in the Property value such as: /// Customer.DataSet.Tables["Customers"].Rows[0] /// </summary> /// <param name="Parent">Parent object to 'start' parsing from. Typically this will be the Page.</param> /// <param name="Property">The property to retrieve. Example: 'Customer.Entity.Company'</param> /// <returns></returns> public static object GetPropertyEx(object Parent, string Property) { Type type = Parent.GetType(); int at = Property.IndexOf("."); if (at < 0) { // Complex parse of the property return GetPropertyInternal(Parent, Property); } // Walk the . syntax - split into current object (Main) and further parsed objects (Subs) string main = Property.Substring(0, at); string subs = Property.Substring(at + 1); // Retrieve the next . section of the property object sub = GetPropertyInternal(Parent, main); // Now go parse the left over sections return GetPropertyEx(sub, subs); } As you can see there’s a fair bit of code involved into retrieving a property or field value reliably especially if you want to support array indexer syntax. This method is then used by a variety of routines to retrieve individual properties including one called GetPropertyEx() which can walk the dot syntax hierarchy easily. Anyway with ReflectionUtils I can  retrieve Page.Request.Url.AbsolutePath using code like this: string url = ReflectionUtils.GetPropertyEx(Page, "Request.Url.AbsolutePath") as string; This works fine, but is bulky to write and of course requires that I use my custom routines. It’s also quite slow as the code in GetPropertyEx does all sorts of string parsing to figure out which members to walk in the hierarchy. Enter dynamic – way easier! .NET 4.0’s dynamic type makes the above really easy. The following code is all that it takes: object objPage = Page; // force to object for contrivance :) dynamic page = objPage; // convert to dynamic from untyped object string scriptUrl = page.Request.Url.AbsolutePath; The dynamic type assignment in the first two lines turns the strongly typed Page object into a dynamic. The first assignment is just part of the contrived example to force the strongly typed Page reference into an untyped value to demonstrate the dynamic member access. The next line then just creates the dynamic type from the Page reference which allows you to access any public properties and methods easily. It also lets you access any child properties as dynamic types so when you look at Intellisense you’ll see something like this when typing Request.: In other words any dynamic value access on an object returns another dynamic object which is what allows the walking of the hierarchy chain. Note also that the result value doesn’t have to be explicitly cast as string in the code above – the compiler is perfectly happy without the cast in this case inferring the target type based on the type being assigned to. The dynamic conversion automatically handles the cast when making the final assignment which is nice making for natural syntnax that looks *exactly* like the fully typed syntax, but is completely dynamic. 
Note that you can also use indexers in the same natural syntax so the following also works on the dynamic page instance: string scriptUrl = page.Request.ServerVariables["SCRIPT_NAME"]; The dynamic type is going to make a lot of Reflection code go away as it’s simply so much nicer to be able to use natural syntax to write out code that previously required nasty Reflection syntax. Another interesting thing about the dynamic type is that it actually works considerably faster than Reflection. Check out the following methods that check performance: void Reflection() { Stopwatch stop = new Stopwatch(); stop.Start(); for (int i = 0; i < reps; i++) { // string url = ReflectionUtils.GetProperty(Page,"Title") as string;// "Request.Url.AbsolutePath") as string; string url = Page.GetType().GetProperty("Title", ReflectionUtils.MemberAccess).GetValue(Page, null) as string; } stop.Stop(); Response.Write("Reflection: " + stop.ElapsedMilliseconds.ToString()); } void Dynamic() { Stopwatch stop = new Stopwatch(); stop.Start(); dynamic page = Page; for (int i = 0; i < reps; i++) { string url = page.Title; //Request.Url.AbsolutePath; } stop.Stop(); Response.Write("Dynamic: " + stop.ElapsedMilliseconds.ToString()); } The dynamic code runs in 4-5 milliseconds while the Reflection code runs around 200+ milliseconds! There’s a bit of overhead in the first dynamic object call but subsequent calls are blazing fast and performance is actually much better than manual Reflection. Dynamic is definitely a huge win-win situation when you need dynamic access to objects at runtime.© Rick Strahl, West Wind Technologies, 2005-2010Posted in .NET  CSharp  

    Read the article

  • SQLAuthority News – Feature Pack for Microsoft SQL Server 2005 SP4

    - by pinaldave
    If you are still using SQL Server 2005 – I suggest that you consider migrating to later version of the SQL Server 2008/2008 R2. Due to any reason, you wanted to continue using SQL Server 2005, I suggest that you take a look at the Feature Pack for Microsoft SQL Server 2005 SP4. There are many different tools and features available in pack, which can be very handy and can solve issues. Microsoft ADOMD.NET Microsoft Core XML Services (MSXML) 6.0 Microsoft OLEDB Provider for DB2 Microsoft SQL Server Management Pack for MOM 2005 Microsoft SQL Server 2000 PivotTable Services Microsoft SQL Server 2000 DTS Designer Components Microsoft SQL Server Native Client Microsoft SQL Server 2005 Analysis Services 9.0 OLE DB Provider Microsoft SQL Server 2005 Backward Compatibility Components Microsoft SQL Server 2005 Command Line Query Utility Microsoft SQL Server 2005 Datamining Viewer Controls Microsoft SQL Server 2005 JDBC Driver Microsoft SQL Server 2005 Management Objects Collection Microsoft SQL Server 2005 Compact Edition Microsoft SQL Server 2005 Notification Services Client Components Microsoft SQL Server 2005 Upgrade Advisor Microsoft .NET Data Provider for mySAP Business Suite, Preview Version Reporting Add-In for Microsoft Visual Web Developer 2005 Express Microsoft Exception Message Box Data Mining Managed Plug-in Algorithm API for SQL Server 2005 Microsoft SQL Server 2005 Reporting Services Add-in for Microsoft SharePoint Technologies Microsoft SQL Server 2005 Data Mining Add-ins for Microsoft Office 2007 SQL Server 2005 Performance Dashboard Reports SQL Server 2005 Best Practices Analyzer Download Feature Pack for Microsoft SQL Server 2005 SP4 Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Service Pack, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I have written the following blog post that talks about the advantage and disadvantage of Shrinking and why one should not be Shrinking a file SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server Expert Imran Mohammed left an excellent comment. I just feel that his comment is worth a big article itself. For everybody to read his wonderful explanation, I am posting this blog post here. Thanks Imran! Shrinking Database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database Operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran’s comments, as there are a few corrections. Comments from Imran - Before I explain to you the concept of Shrink Database, let us understand the concept of Database Files. When we create a new database inside the SQL Server, it is typical that SQl Server creates two physical files in the Operating System: one with .MDF Extension, and another with .LDF Extension. .MDF is called as Primary Data File. .LDF is called as Transactional Log file. If you add one or more data files to a database, the physical file that will be created in the Operating System will have an extension of .NDF, which is called as Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same extension as .LDF. The questions now are, “Why does a new data file have a different extension (.NDF)?”, “Why is it called as a secondary data file?” and, “Why is .MDF file called as a primary data file?” Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment. A data file with a .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is “Data about Data”. An example for Meta Data includes system objects that store information about other objects, except the data stored by the users. sysobjects stores information about all objects in that database. sysindexes stores information about all indexes and rows of every table in that database. syscolumns stores information about all columns that each table has in that database. sysusers stores how many users that database has. Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it’s a system data about the data. Because Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data file because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (Tables, indexes etc.) in the Primary data file (This file is present in the Primary File group), by mentioning that you want to create this object under the Primary File Group. 
Any additional data file that you add to the database will have only transactional data but no Meta Data, so that’s why it is called as the Secondary Data File. It is given the extension name .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File(s). There are many advantages of storing data in different files that are under different file groups. You can put your read only in the tables in one file (file group) and read-write tables in another file (file group) and take a backup of only the file group that has read the write data, so that you can avoid taking the backup of a read-only data that cannot be altered. Creating additional files in different physical hard disks also improves I/O performance. A real-time scenario where we use Files could be this one: Let’s say you have created a database called MYDB in the D-Drive which has a 50 GB space. You also have 1 Database File (.MDF) and 1 Log File on D-Drive and suppose that all of that 50 GB space has been used up and you do not have any free space left but you still want to add an additional space to the database. One easy option would be to add one more physical hard disk to the server, add new data file to MYDB database and create this new data file in a new hard disk then move some of the objects from one file to another, and put the file group under which you added new file as default File group, so that any new object that is created gets into the new files, unless specified. Now that we got a basic idea of what data files are, what type of data they store and why they are named the way they are, let’s move on to the next topic, Shrinking. First of all, I disagree with the Microsoft terminology for naming this feature as “Shrinking”. Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means to remove an empty space from database files and release the empty space either to the Operating System or to SQL Server. Let’s examine this through an example. Let’s say you have a database “MYDB” with a size of 50 GB that has a free space of about 20 GB, which means 30GB in the database is filled with data and the 20 GB of space is free in the database because it is not currently utilized by the SQL Server (Database); it is reserved and not yet in use. If you choose to shrink the database and to release an empty space to Operating System, and MIND YOU, you can only shrink the database size to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data. So, if you have a database that is full and has no empty space in the data file and log file (you don’t have an extra disk space to set Auto growth option ON), YOU CANNOT issue the SHRINK Database/File command, because of two reasons: There is no empty space to be released because the Shrink command does not compress the database; it only removes the empty space from the database files and there is no empty space. Remember, the Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink a database. Now answering your questions: (1) Q: What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using the DBCC SHRINKDATABASE (NorthWind, 10) ? 
A: According to Books Online (For SQL Server 2000): UsedPages: the number of 8-KB pages currently used by the file. EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important Note: Before asking any question, make sure you go through Books Online or search on the Google once. The reasons for doing so have many advantages: 1. If someone else already has had this question before, chances that it is already answered are more than 50 %. 2. This reduces your waiting time for the answer. (2) Q: What is the difference between Shrinking the Database using DBCC command like the one above & shrinking it from the Enterprise Manager Console by Right-Clicking the database, going to TASKS & then selecting SHRINK Option, on a SQL Server 2000 environment? A: As far as my knowledge goes, there is no difference, both will work the same way, one advantage of using this command from query analyzer is, your console won’t be freezed. You can do perform your regular activities using Enterprise Manager. (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end-users, DBAs or the SERVER/SYSTEM itself? A: .NDF File is a secondary data file. You never heard of it because when database is created, SQL Server creates database by default with only 1 data file (.MDF) and 1 log file (.LDF) or however your model database has been setup, because a model database is a template used every time you create a new database using the CREATE DATABASE Command. Unless you have added an extra data file, you will not see it. This file is used by the SQL Server to store data which are saved by the users. Hope this information helps. I would like to as the experts to please comment if what I understand is not what the Microsoft guys meant. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SDL2 sprite batching and texture atlases

    - by jms
    I have been programming a 2D game in C++, using the SDL2 graphics API for rendering. My game concept currently features effects that could result in even tens of thousands of sprites being drawn simultaneously to the screen. I'd like to know what can be done to increase rendering efficiency if the need arises, preferably using the SDL2 API only. I have previously taken a quick look at OpenGL-based 2D rendering, and noticed that SDL2 lacks a command like int SDL_RenderCopyMulti(SDL_Renderer* renderer, SDL_Texture* texture, const SDL_Rect* srcrects, SDL_Rect* dstrects, int count), which would permit SDL to benefit from two common techniques used for efficient 2D graphics: Texture batching: Sorting sprites by the texture used, and then simultaneously rendering as many sprites using the same texture as possible, changing only the source area on the texture and the destination area on the render target between sprites. This allows the encapsulation of the whole operation in a single GPU command, reducing the overhead drastically compared with multiple distinct calls. Texture atlases: Instead of creating one texture for each frame of each animation of each sprite, combining multiple animations and even multiple sprites into a single large texture. This lessens the impact of changing the current texture when switching between sprites, as the correct texture is often ready to be used from the previous draw call. Furthermore, the GPU is optimized for handling large textures, in contrast to the many tiny textures typically used for sprites. My question: Would SDL2 still get somewhat faster from any rudimentary sprite sorting or from combining multiple images into one texture, thanks to automatic video driver optimizations? If I encounter performance issues related to 2D rendering in the future, will I be forced to switch to OpenGL for lower-level control over the GPU? Edit: Are there any plans to include such functionality in the near future?
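
    Even without a multi-copy call, sorting the per-frame draw list by texture before issuing the individual SDL_RenderCopy calls keeps texture switches to a minimum, and an atlas reduces how many distinct textures there are to switch between. The batching idea, written as a binding-agnostic sketch (the renderer and sprite types here are stand-ins, not a real SDL2 API):

      # CPU-side texture batching sketch: group the frame's sprites by the
      # texture (atlas) they come from so the backend switches textures as
      # rarely as possible. Renderer/sprite types are stand-ins, not SDL2 calls.
      from collections import namedtuple
      from itertools import groupby

      Sprite = namedtuple("Sprite", "layer texture_id src_rect dst_rect")

      def render_frame(renderer, sprites):
          # Sort by (layer, texture) so overlapping sprites keep their draw
          # order while sprites within a layer are still grouped by texture.
          ordered = sorted(sprites, key=lambda s: (s.layer, s.texture_id))
          for (_, texture_id), group in groupby(
                  ordered, key=lambda s: (s.layer, s.texture_id)):
              renderer.bind_texture(texture_id)      # one switch per group
              for s in group:
                  renderer.copy(texture_id, s.src_rect, s.dst_rect)

    How much this actually helps under SDL_Render depends on how the backend driver batches internally, which is exactly the open question above.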

    Read the article
