Search Results

Search found 40567 results on 1623 pages for 'database performance'.


  • Performance Tuning and Query Optimisation–SQLBits Training Day

    - by simonsabin
    I will be doing a training day at SQLBits in April on Performance Tuning and Query Optimisation. This is the outline for the day. It's going to be an intense day, and I look forward to seeing you there. To register go to http://www.sqlbits.com/information/registration.aspx. Places are limited, so make sure you register soon. Outline of the day: most database performance issues are due to a combination of bad queries, bad database design or poor indexing, all of which are related to each other. In this...(read more)

    Read the article

  • How can an SQL relational database be used to model a thesaurus? [closed]

    - by Miles O'Keefe
    I would like to design a web app that functions as a simple thesaurus: a long list of words with attributes, all of which are linked to each other. This thesaurus data model can be defined as a controlled vocabulary arranged in a known order, in which equivalence, hierarchical, and associative relationships among terms are clearly displayed and identified by standardized relationship indicators. My idea so far is to have one database in which every word is a table, and every table contains all words related to that word, e.g. Thesaurus(database) - happy(table) - excited(row)|cheerful(row)|lively(row). Is there a more efficient way to store words and their relationships to other words in a relational SQL database?
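
    A common alternative is to store every term once and keep the links in a separate relationship table, rather than one table per word. A minimal sketch, assuming hypothetical table names term and term_relation (not from the question):

      -- One row per word, instead of one table per word (hypothetical schema).
      CREATE TABLE term (
          term_id INTEGER PRIMARY KEY,
          word    VARCHAR(100) NOT NULL UNIQUE
      );

      -- One row per link between two terms; rel_type distinguishes
      -- equivalence, hierarchical and associative relationships.
      CREATE TABLE term_relation (
          from_term_id INTEGER NOT NULL REFERENCES term(term_id),
          to_term_id   INTEGER NOT NULL REFERENCES term(term_id),
          rel_type     VARCHAR(20) NOT NULL, -- e.g. 'synonym', 'broader', 'related'
          PRIMARY KEY (from_term_id, to_term_id, rel_type)
      );

      -- All words related to 'happy':
      SELECT t2.word, r.rel_type
      FROM term t1
      JOIN term_relation r ON r.from_term_id = t1.term_id
      JOIN term t2         ON t2.term_id = r.to_term_id
      WHERE t1.word = 'happy';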

    Read the article

  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time series data: 500M entries now, and 200K new records every day. The total size is around 15 GB for now. My clients mostly query the table via a PHP script, and the size of the result set is around 10K records (not very large): select * from T where timestamp > X and timestamp < Y and additionFilters. I want this operation to be cheap. Currently my table is hosted in Postgres 7, on a single box with 16 GB of memory, and I would love to see some good suggestions for hosting this at low cost while also allowing me to scale up for performance if needed. The table serves: 1. Query: 90% 2. Insert: 9.9% 3. Update: 0.1% <-- very rare.
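
    For that query shape, the cheapest first step is usually an index on the timestamp column, and on a reasonably recent PostgreSQL release, time-based partitioning so old data can be archived cheaply. A minimal sketch, assuming hypothetical names (table t, column ts) and an upgrade from Postgres 7 to a modern version:

      -- B-tree index so "ts > X AND ts < Y" becomes a range scan (hypothetical names).
      CREATE INDEX idx_t_ts ON t (ts);

      -- On PostgreSQL 10+, declarative range partitioning by month keeps each
      -- partition small and lets old months be dropped or archived cheaply.
      CREATE TABLE t_part (
          ts      timestamptz NOT NULL,
          payload text
      ) PARTITION BY RANGE (ts);

      CREATE TABLE t_part_2010_01 PARTITION OF t_part
          FOR VALUES FROM ('2010-01-01') TO ('2010-02-01');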

    Read the article

  • Optimal disk partitions for database setup (15 Drives)

    - by Jason
    We are setting up a new database system and have 15 drives to play with (+2 on-board for the OS). With a total of 15 drives, would it be better to set up all 14 as one RAID-10 block (+1 hot spare), OR split them into two RAID-10 sets, one for data (8 disks) and one for logs/backups (6 disks)? My question boils down to the following: is there a specific point where having more drives in a single RAID-10 setup will outperform having the drives broken into smaller RAID-10 sets?

    Read the article

  • Database – Beginning with Cloud Database As A Service

    - by Pinal Dave
    I love my weekend projects. Everybody does different activities on their weekend – like traveling, reading or just nothing. Every weekend I try to do something creative and different in the database world. The goal is to learn something new, and if I enjoy the learning experience I share it with the world. This weekend, I decided to explore Cloud Database As A Service – Morpheus. In my career I have managed many databases in the cloud and I have good experience in managing them. I should highlight that today's applications use multiple databases, from SQL for transactions and analytics, NoSQL for documents, and in-memory stores for caching to indexing for search. Provisioning and deploying these databases often requires extensive expertise and time. Often these databases are also not deployed on the same infrastructure and can create unnecessary latency between the application layer and the databases, not to mention the different quality of service based on the infrastructure and the service provider where they are deployed. Moreover, there are additional problems that I have experienced with a traditional database setup when hosted in the cloud: database provisioning and orchestration, slow speed due to hardware issues, poor monitoring tools, and high network latency. Now, if you have great software and an expert network engineer, you can continuously work on the above problems and overcome them. However, not every organization has the luxury of top-notch experts in the field. The above issues are related to infrastructure, but there are a few more problems related to the software/application side as well. Here are the top three things which can be problems if you do not have an application expert: replication and clustering, simple provisioning of hard drive space, and automatic sharding. Well, Morpheus looks like a product built by experts who have faced similar situations in the past. The product pretty much addresses all the pain points of developers and database administrators. What is different about Morpheus is that it offers a variety of databases, from MySQL, MongoDB and ElasticSearch to Redis, as a service. Thus users can pick and choose any combination of these databases. All of them can be provisioned in a matter of minutes with a simple and intuitive point-and-click user interface. The Morpheus cloud is built on Solid State Drives (SSDs) and is designed for high-speed database transactions. In addition it offers a direct link to Amazon Web Services to minimize latency between the application layer and the databases. Here are a few steps on how one can get started with Morpheus. Follow along with me. First go to http://www.gomorpheus.com and register for a new, free account. Step 1: Sign up. It is very simple to sign up for Morpheus. Step 2: Select your database. I use MySQL for my daily routine, so I selected MySQL. Upon clicking the big red button to add an instance, it prompted a dialog for creating a new instance. Step 3: Create a user. Now we just have to create a user in the portal, which we will use to connect to the database hosted at Morpheus. Click on your database instance and it will bring you to the user screen. Here you will once again notice a big red button to create a new user. I created a user with my first name. Step 4: Configure your MySQL client. I used MySQL Workbench and connected to the MySQL instance I had created, using its IP address and the new user. That's it! You are connected to the MySQL instance. Now you can create your objects just as you would on your local box.
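
    For example, once MySQL Workbench is connected, creating objects on the hosted instance works the same as on a local server; a tiny sketch with hypothetical database and table names (not from the original post):

      -- Hypothetical schema created on the remote, Morpheus-hosted MySQL instance.
      CREATE DATABASE weekend_project;
      USE weekend_project;

      CREATE TABLE visitors (
          id         INT AUTO_INCREMENT PRIMARY KEY,
          name       VARCHAR(100) NOT NULL,
          visited_at DATETIME NOT NULL
      );

      INSERT INTO visitors (name, visited_at) VALUES ('test user', NOW());
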
You will have all the features of Morpheus when you are working with your database. Dashboard: while working with Morpheus, I was most impressed with its dashboard. In future blog posts, I will write more about this feature. Also, with Morpheus you use the same process for provisioning and connecting to the other databases: MongoDB, ElasticSearch and Redis. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Should database-models (conceptual or physical) be reviewed by DBAs?

    - by user61852
    Where I work, new applications being developed that will use their own relational database must have their database models (conceptual, then physical) reviewed and approved by DBAs. Things looked after are normalization, antipatterns, table and column naming standards, etc. Is this really a DBA's responsibility? Or should it be, to a greater extent, the responsibility of app designers and architects?

    Read the article

  • What would a database look like if it were normalized to be completely abstracted? Let's call it Max(n) normal form

    - by Doug Chamberlain
    edit: By simplest form I was not implying that it would be easy to understand. For instance, developing in low-level assembly language is the simplest way you can develop code, but it is far from the easiest. Essentially, what I am asking is this: in math you can simplify a fraction to a point where it can no longer be simplified. Can the same be true for a database, and what would a database look like in its simplest form?
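
    As a rough illustration of what "cannot be decomposed any further" tends to look like, here is a hedged sketch in the spirit of irreducible (6NF-style) decomposition, where each table holds a key plus exactly one fact; all names are hypothetical:

      -- A conventional design, person(person_id, name, birth_date),
      -- split so that every non-key attribute lives in its own table.
      CREATE TABLE person            (person_id INTEGER PRIMARY KEY);
      CREATE TABLE person_name       (person_id INTEGER PRIMARY KEY REFERENCES person(person_id),
                                      name VARCHAR(100) NOT NULL);
      CREATE TABLE person_birth_date (person_id INTEGER PRIMARY KEY REFERENCES person(person_id),
                                      birth_date DATE NOT NULL);

      -- Reassembling the original row now requires joins:
      SELECT p.person_id, n.name, b.birth_date
      FROM person p
      LEFT JOIN person_name n       ON n.person_id = p.person_id
      LEFT JOIN person_birth_date b ON b.person_id = p.person_id;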

    Read the article

  • EF4 performance tips and tricks

    - by Will
    I've gotten to that point in one of my projects, and haven't found much information out there. So if you've got some pointers for improving performance in the new Entity Framework 4, please let us know!

    Read the article

  • SQL 2008 Database tuning advisor won’t start

    - by Andrew Hancox
    For some reason I can't get DTA to connect to my development machine. It connects to a remote DB just fine, but when I point it to my dev machine I get an error saying: Failed to initialize MSDB database for tuning (exit code: -1073741819). I'm pretty sure it's not a permissions issue, since I've used Profiler to capture what it's doing and all of the commands it has run so far look fine and are being run under my account, which is associated with the sysadmin role; when I run them in SQL Server Management Studio they go through fine. I'm pretty convinced that the problem is related to creating the objects in MSDB that are used by DTA, but I tried creating these manually (I found scripts on the web) and it just seems to push the problem along slightly. I'm going out of my mind - I've even tried reinstalling SQL but that hasn't fixed it. I'm using SQL 2008 with SP1 (10.0.2531) on Windows Server 2008 (patched up to date). SAVE ME!!!!!

    Read the article

  • PostgreSQL 9.1 Database Replication Between Two Production Environments with Load Balancer

    - by littleK
    I'm investigating different solutions for database replication between two PostgreSQL 9.1 databases. The setup will include two production servers on the cloud (Amazon EC2 X-Large Instances), with an elastic load balancer. What is the typical database implementation for this type of setup? Master-master replication (with Bucardo or rubyrep)? Or perhaps use only one shared database between the two environments, with shared-disk failover? I've been getting some ideas from http://www.postgresql.org/docs/9.0/static/different-replication-solutions.html. Since I don't have a lot of experience in database replication, I figured I would ask the experts. What would you recommend for the described setup?

    Read the article

  • VB.Net IO performance

    - by CFP
    Having read this page, I can't believe that VB.Net has such terrible performance when it comes to I/O. Is this still true today? How does the .Net Framework 2.0 perform in terms of I/O (that's the version I'm targeting)?

    Read the article

  • How should I copy the "mysql" database to my new server using PHPMyAdmin

    - by undefined
    My new webhosting company has set up a MySQL database for me, and it has the mysql and information_schema databases already there. I want to copy my existing database from another server (a) to the new one (b). I assume I need to overwrite the 'mysql' database on server (b) with the one from my existing server (a), or at least copy over the permissions. 1) What information does the mysql database hold? I can see users and permissions; does it have the login info for phpMyAdmin? I don't want to overwrite that, obviously. 2) Should I drop the database on server (b) and import my original? 3) Should I just copy the users table? 4) Do I need to worry about the information_schema database? Should I copy this over too? Thanks
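
    Rather than overwriting the whole mysql database on server (b), which would also replace its own accounts, a safer pattern is to re-create the users and grants from server (a). A hedged sketch with a hypothetical user name:

      -- On server (a): list the accounts, then dump the grant statements for each one.
      SELECT user, host FROM mysql.user;
      SHOW GRANTS FOR 'appuser'@'%';

      -- On server (b): re-run the statements produced above, for example:
      CREATE USER 'appuser'@'%' IDENTIFIED BY 'choose-a-password';
      GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'appuser'@'%';
      FLUSH PRIVILEGES;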

    Read the article

  • SQLAuthority News – Guest Post – Performance Counters Gathering using Powershell

    - by pinaldave
    Laerte Junior has previously helped me personally to resolve an issue with the PowerShell installation on my computer. He did an awesome job helping. He has sent another wonderful article regarding performance counters for the readers of this blog. I really liked it, and I expect all of you who are PowerShell geeks will like it as well. As a good DBA, you know that our social life is restricted to a few movies over the year and, when possible, a pizza in a restaurant next to your company's place, of course. So what we have to do is create methods through which we can streamline our daily processes, go home early, and eventually have a nice time with our family (and not sleep on the couch). As a consultant or full-time employee, one of our daily tasks is to monitor performance counters using Perfmon. To be honest, the IDE is getting more complicated. To deal with this, I thought of a solution using PowerShell. Yes, with a few lines of PowerShell you can configure which counters to use, and with one more line you can already start collecting data. Let's see one scenario: you are a consultant who has several clients and has just closed another project troubleshooting an SQL Server environment. You are to use Perfmon to collect data from the server, and you already have XML configuration files made with the counters that you will be using: a file for memory bottlenecks, one for CPU, etc. With one PowerShell command line for each XML file, you start collecting. The output of the collection is a TXT file that is then loaded into SQL Server. With two command lines for each XML file, you complete the whole data collection process. Creating an XML configuration file for memory counters: Get-PerfCounterCategory -CategoryName "Memory" | Get-PerfCounterInstance | Get-PerfCounterCounters | Save-ConfigPerfCounter -PathConfigFile "c:\temp\ConfigfileMemory.xml" -newfile Creating an XML configuration file for the Buffer Manager counters Page lookups/sec, Page reads/sec, Page writes/sec and Page life expectancy: Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters -CounterName "Page*" | Save-ConfigPerfCounter -PathConfigFile "c:\temp\BufferManager.xml" -NewFile Then you start the collection: Set-CollectPerfCounter -DateTimeStart "05/24/2010 08:00:00" -DateTimeEnd "05/24/2010 22:00:00" -Interval 10 -PathConfigFile c:\temp\ConfigfileMemory.xml -PathOutputFile c:\temp\ConfigfileMemory.txt To complete the Buffer Manager collection, you need one more counter, the Buffer cache hit ratio. Just add the new counter to BufferManager.xml, omitting the -NewFile parameter: Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters -CounterName "Buffer cache hit ratio" | Save-ConfigPerfCounter -PathConfigFile "c:\temp\BufferManager.xml" And start the collection: Set-CollectPerfCounter -DateTimeStart "05/24/2010 08:00:00" -DateTimeEnd "05/24/2010 22:00:00" -Interval 10 -PathConfigFile c:\temp\BufferManager.xml -PathOutputFile c:\temp\BufferManager.txt You do not know which counters are in the Buffer Manager category? Simple! Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters Let's look at one output file, shown below. It is ready to bulk insert into SQL Server (a minimal sketch of such a load follows this excerpt). As you can see, PowerShell makes this process incredibly easy and fast. Do you want to see more examples?
Visit my blog at Shell Your Experience. You can find more about Laerte Junior here: www.laertejuniordba.spaces.live.com, www.simple-talk.com/author/laerte-junior, www.twitter.com/laertejuniordba. SQL Server PowerShell Extension Team: http://sqlpsx.codeplex.com/ Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Add-On, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: Powershell
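
    The excerpt notes that the collected TXT output is ready to bulk insert into SQL Server. A minimal sketch of such a load, assuming a hypothetical staging table and file path (the real column list depends on the collector's output format):

      -- Hypothetical staging table for the collected counter values.
      CREATE TABLE dbo.PerfCounterStaging (
          CollectedAt  DATETIME,
          CounterPath  NVARCHAR(400),
          CounterValue FLOAT
      );

      -- Load the text file produced by Set-CollectPerfCounter (the path and
      -- field terminator are assumptions; adjust them to the actual output).
      BULK INSERT dbo.PerfCounterStaging
      FROM 'c:\temp\BufferManager.txt'
      WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', FIRSTROW = 2);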

    Read the article

  • Performance and Optimization Isn’t Evil

    - by Reed
    Donald Knuth is a fairly amazing guy. I consider him one of the most influential contributors to computer science of all time. Unfortunately, most of the time I hear his name, I cringe. This is because it's typically somebody quoting a small portion of one of his famous statements on optimization: "premature optimization is the root of all evil." I mention that this is only a portion of the entire quote, and, as such, I feel that Knuth is being quoted out of context. Optimization is important. It is a critical part of every software development effort, and should never be ignored. A developer who ignores optimization is not a professional. Every developer should understand optimization – know what to optimize, when to optimize it, and how to think about code in a way that is intelligent and productive from day one. I want to start by discussing my own personal motivation here. I recently wrote about a performance issue I ran across, and was slammed by multiple comments and emails that effectively boiled down to: "You're an idiot. Premature optimization is the root of all evil. This doesn't matter." It didn't matter that I discovered this while measuring in a profiler, and that it was a portion of my code base that can take "many hours to complete." Even so, multiple people instantly jump to "it's premature – it doesn't matter." This is a common thread I see. For example, StackOverflow has many pages of posts with answers that boil down to (mis)quoting Knuth. In fact, just about any question relating to a performance-related issue gets this quote thrown at it immediately – whether it deserves it or not. That being said, I did receive some positive comments and emails as well. Many people want to understand how to optimize their code, approaches to take, tools and techniques they can use, and any other advice they can discover. First, let's get back to Knuth – I mentioned before that Knuth is being quoted out of context. Let's start by looking at the entire quote from his 1974 paper Structured Programming with go to Statements: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified." Ironically, if you read Knuth's original paper, this statement was made in the middle of a discussion of how Knuth himself had changed how he approaches optimization. It was never a statement saying "don't optimize", but rather, "optimizing intelligently provides huge advantages." His approach had three benefits: "a) it doesn't take long" … "b) the payoff is real", c) you can "be less efficient in the other parts of my programs, which therefore are more readable and more easily written and debugged." Looking at Knuth's premise here, and reading that section of his paper, really leads to a few observations: optimization is important ("he will be wise to look carefully at the critical code"); normally, 3% of your code – three lines out of every 100 you write – are "critical code" and will require some optimization ("we should not pass up our opportunities in that critical 3%"); optimization, if done well, should not be time consuming ("it doesn't take long"); and optimization, if done correctly, provides real benefits ("the payoff is real"). None of this is new information.
People who care about optimization have been discussing this for years – for example, Rico Mariani's Designing For Performance (a fantastic article) discusses many of the same issues very intelligently. That being said, many developers seem unable or unwilling to consider optimization. Many others don't seem to know where to start. As such, I'm going to spend some time writing about optimization – what it is, how we should think about it, and what we can do to improve our own code.

    Read the article

  • SQLAuthority News – Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning Training

    - by pinaldave
    Last 3 days to register for the courses. This is a one-time offer with a big discount. The deadline for course registration is 5th May, 2010. There are two different courses offered by Solid Quality Mentors: 1) Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning – Pinal Dave. Date: May 12-14, 2010. Price: Rs. 14,000/person for 3 days. Discount Code: ‘SQLAuthority.com’. Effective Price: Rs. 11,000/person for 3 days. 2) SharePoint 2010 – Joy Rathnayake. Date: May 10-11, 2010. Price: Rs. 11,000/person for 2 days. Discount Code: ‘SQLAuthority.com’. Effective Price: Rs. 8,000/person for 2 days. Download the complete PDF brochure. To register, either send an email to [email protected] or call +91 95940 43399. Feel free to drop me an email at pinal “at” SQLAuthority.com for any additional information and clarification. Training Venue: Abridge Solutions, #90/B/C/3/1, Ganesh GHR & MSY Plaza, Vittalrao Nagar, Near Image Hospital, Madhapur, Hyderabad – 500 081. Additionally, there is a special program, SolidQ India Insider. This is available to the first few registrants of the courses only. Read more details about the course here. Read my TechEd India 2010 experience here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology

    Read the article

  • SQLAuthority News – SQL Server Performance Series Hyderabad / Pune – Nov/Dec 2010

    - by pinaldave
    Just a quick note that the SQL Server Performance Tuning and Optimization Seminar series which I am offering at Hyderabad and Pune is almost sold out. Read the details of the earlier successful seminar conducted at Colombo, Sri Lanka over here. Hyderabad: Nov 27-28, 2010 (last 3 seats left), Best Western Amrutha Castle, 5-9-16, Opp. Secretariat, Saifabad, Khairatabad, Hyderabad, Andhra Pradesh. Pune: Dec 04-05, 2010 (last 6 seats left), location TBA as we are looking for a larger-capacity room. I promise that this is going to be great fun, as these sessions are very different from any usual sessions you have ever attended. These sessions are absolutely interactive and all the attendees will feel part of the event. As larger groups are not convenient, we have limited these seminars to a very small group of people. This way attendees can go to the instructors at any time and feel connected. This 2-day seminar will cover the best of the best concepts and practices from popular courses offered by Solid Quality Mentors. Instead of learning theory only, the seminar focuses on providing real-world experience by using demos and scenarios derived from customer engagements. The seminar is uniquely structured and well thought out. Sessions are discussion-based and are designed to be an interactive gateway between the instructor and the participants for an optimal learning experience. The seminar is intended to be immersion-based, where participants will have plenty of opportunities to get deeply involved in the concepts presented by the instructor. Agenda of the event. To join the seminars, drop me an email. My email addresses are pinal “at” SQLAuthority.com and IndiaInfo “at” SolidQ.com. If you specify SQLAuthority.com in the title, you will receive a special discount on the specified price. Yes, a sure 20%, I promise. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • June 22-24, 2010 in London City Level 400 SQL Server Performance Monitoring & Tuning Workshop

    - by sqlworkshops
    We are organizing the “3 Day Level 400 SQL Server Performance Monitoring & Tuning Workshop” for the 1st time in London City during June 22-24, 2010.Agenda is located @ www.sqlworkshops.com/workshops & you can register @ www.sqlworkshops.com/ruk. Charges: £ 1800 (5% discount for those who register before 21st May, £ 1710).In this 3 Day Level 400 hands-on workshop, unlike short SQLBits sessions, we go deeper on the tuning topics. Not sure if this will be a good use of your time & money? Watch our webcasts @ www.sqlworkshops.com/webcasts.We are trying to balance these commercial offerings with our free community contributions. Financially: These workshops are essential for us to stay in business!Feedback from Finland workshop posted by Jukka, Wärtsilä Oyj on February 23, 2010 to the LinkedIn SQL Server User Group Finland (more feedbacks @ www.sqlworkshops.com/feedbacks):Just want to start this thread and give some feedback on the Workshop that I attended last week at Microsoft.Three days in a row, deep dive into the query optimization and performance monitoring :-) I must say, that the SQL guru Ramesh has all the tricks up in his sleeves.The workshop was very helpful and what's most important: no slide show marathon: samples after samples explained very clearly and with our own class room SQL servers we can try the same stuff while Ramesh typed his own samples.If the workshop will be rearranged, I can most willingly recommend it to anyone who wants to know what's "under the hood" of SQL Server 2008.Once again, thank you Microsoft and Ramesh to make this happen. May the force be with us all :-)Hope to see you @ the Workshop. Feel free to pass on this information to your SQL Server colleagues.-ramesh-www.sqlbits.com/speakers/r_meyyappan/default.aspx

    Read the article

  • Poor system performance on my machine running Ubuntu 12.04 (Beta 2, updated to the present moment)

    - by Mohammad Kamil Nadeem
    Why is it that my system dies when multitasking (it has been happening since 11.10) on Ubuntu 11.10 (Unity), Kubuntu 11.10 (KDE) and Deepin Linux, which is based on 11.10 (GNOME Shell)? The thing is that I thought with 12.04 I would get performance like I used to get on 11.04, on which everything used to run fine without any lag or hiccups. The same lagging (the browser starts to stutter, increased delay in launching the Dash and applications) is happening on 12.04: http://i.imgur.com/YChKB.png and http://i.imgur.com/uyXLA.png . I believe that my system configuration is sufficient for running Ubuntu, as you can check here: http://paste.ubuntu.com/929734/ . I had the Google voice and chat plugin installed on 12.04, so someone suggested that I should remove it and see if the performance improves, but no such respite (I am having this on multiple operating systems based on Ubuntu 11.10, as I have mentioned above). On a friend's suggestion I ran a memory test through Partition Magic and my system passed fine. One more thing I would like to know: why, when I have 2 GB RAM and 2.1 GB swap, does my system start to lag and run poorly when RAM consumption goes above 500+? If you require any more information I will gladly provide it.

    Read the article

  • Improving the performance of a db import process

    - by mmr
    I have a program in Microsoft Access that processes text and also inserts data into a MySQL database. This operation takes 30 minutes or less to finish. I translated it into VB.NET and it takes 2 hours to finish. The program goes like this: a text file contains individual swipes from the corresponding people; each record contains their id, the time and date of the swipe in the machine, and an indicator of whether it is a time-in or a time-out. I process this text, segregate the information and insert the time-in and time-out per row. I also check if there are double occurrences in the database. After checking, I simply merge the time-in and time-out of the corresponding person into one row only. This process takes 2 hours to finish in VB.NET, considering I have a table to compare against which contains 600,000+ rows. Now, I read on the internet that Python is best at text processing; I already have a test, but I have doubts about the database operations. What do you think is the best programming language for this kind of problem? How can I speed up the process? My first idea was using Python instead of VB.NET, but since people here on SO are telling me that this most probably won't help, I am searching for different solutions.
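
    Whatever the client language, the biggest speed-up usually comes from doing the merge set-based inside MySQL instead of row by row from the application. A hedged sketch, assuming hypothetical staging and attendance tables (not from the question):

      -- Hypothetical staging table loaded straight from the text file.
      CREATE TABLE swipe_staging (
          emp_id     INT NOT NULL,
          swipe_time DATETIME NOT NULL,
          direction  ENUM('IN','OUT') NOT NULL
      );

      -- Collapse each person's day into one row with time-in and time-out,
      -- instead of comparing and merging rows one at a time in application code.
      INSERT INTO attendance (emp_id, work_date, time_in, time_out)
      SELECT emp_id,
             DATE(swipe_time),
             MIN(CASE WHEN direction = 'IN'  THEN swipe_time END),
             MAX(CASE WHEN direction = 'OUT' THEN swipe_time END)
      FROM swipe_staging
      GROUP BY emp_id, DATE(swipe_time);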

    Read the article

  • Polygons vs sprites rendering performance in Unity for windows phone 8

    - by Géry Arduino
    I'm currently building a Windows Phone 8 game with Unity, having 111 (no more, no less) sprites being updated each frame. I have a strong overhead in the profiler (70% to 90% minimum). I tried the following to get a higher frame rate: I'm running it with minimum quality settings, and I tried disabling and enabling V-Sync. Finally I managed to get 60 FPS, but I still have a large overhead. I believe I should have more than 60 FPS for such a small number of sprites. Moreover, I still have to implement the game logic over this, so I'd like some room in my FPS to be able to work. I was wondering if it would be better in terms of performance to use polygons instead of sprites, as sprites are quite new in Unity (that would give me around 222 triangles). Has anyone tried to check the performance differences between sprites and actual mesh renderers in Unity when it comes to phones? If so, what could be the best option in that case? FYI: I'm using the Windows Phone 8 emulator on Visual Studio; I have a compliant computer for that, so it should normally reflect the behavior of a real phone (expecting some differences, but still...). EDIT: To clarify my question, I wonder what is most efficient on Windows Phone 8: sprites or mesh renderers?

    Read the article

  • Performance Overhead of Encrypted /home

    - by SabreWolfy
    I have a netbook with Windows on the second partition and Xubuntu (/ and /home) on the third partition. I selected to encrypt my home folder during installation. The performance of the netbook is adequate for the small machine that it is, but I'm looking to improve performance. I could not find much information about the overhead (CPU or drive) associated with home partition encryption. I ran the following, writing to my home partition as well as the mounted Windows partition: dd if=/dev/zero of=~/dummy bs=512 count=10240 dd if=/dev/zero of=/media/Windows/dummy bs=512 count=10240 The first returned 2.4MB/s and the second returned 2.5MB/s. Can I therefore deduce that there is very little overhead to home folder encryption? I'm not sure if the different filesystems will make any difference (/ and /home are ext3). Update 1 I don't know why I didn't use /tmp instead of the mounted Windows folder. Only /home is encrypted, so /tmp is unencrypted ext3. The results of the dd as above are astounding: ~: 2.4 MB/s /tmp: 42.6 MB/s Comments please? The reason I am asking this is that disk access on the netbook is noticeably slow. Update 2 I timed each of the dd operations with time: ~: real 0m2.217s user 0m0.028s sys 0m2.176s /tmp: real 0m0.152s user 0m0.012s sys 0m0.136s See also: discussion on UbuntuForums.org and bug report Edit: Output of mount: /dev/sda3 on / type ext3 (rw,noatime,errors=remount-ro,user_xattr,commit=600) proc on /proc type proc (rw,noexec,nosuid,nodev) none on /sys type sysfs (rw,noexec,nosuid,nodev) fusectl on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) none on /dev type devtmpfs (rw,mode=0755) none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) none on /dev/shm type tmpfs (rw,nosuid,nodev) none on /var/run type tmpfs (rw,nosuid,mode=0755) none on /var/lock type tmpfs (rw,noexec,nosuid,nodev) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev) gvfs-fuse-daemon on /home/USER/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=USER)

    Read the article

  • How to improve batching performance

    - by user4241
    Hello, I am developing a sprite-based 2D game for mobile platform(s) and I'm using OpenGL (well, actually Irrlicht) to render graphics. First I implemented sprite rendering in a simple way: every game object is rendered as a quad with its own GPU draw call, meaning that if I had 200 game objects, I made 200 draw calls per frame. Of course this was a bad choice and my game was completely CPU bound, because there is a little CPU overhead associated with every GPU draw call. The GPU stayed idle most of the time. Now, I thought I could improve performance by collecting objects into large batches and rendering these batches with only a few draw calls. I implemented batching (so that every game object sharing the same texture is rendered in the same batch) and thought that my problems were gone... only to find out that my frame rate was even lower than before. Why? Well, I have 200 (or more) game objects, and they are updated 60 times per second. Every frame I have to recalculate new positions (translation and rotation) for the vertices on the CPU (the GPU on mobile platforms does not support instancing, so I can't do it there), and doing this calculation 48,000 times per second (200*60*4, since every sprite has 4 vertices) simply seems to be too slow. What could I do to improve performance? All game objects are moving/rotating (almost) every frame, so I really have to recalculate vertex positions. The only optimization that I could think of is a look-up table for rotations so that I wouldn't have to calculate them. Would point sprites help? Any nasty hacks? Anything else? Thanks.

    Read the article

  • How to check system performance?

    - by Woltan
    Hi all, I am a new Ubuntu user and really like the look and the features of the OS. However, I have a feeling that the performance could be better. By that I mean: somehow scrolling sites within Firefox seems laggy. I do not know how I should measure it, but there is a difference. Not that it is unusable, but it is aggravating. Java programs are running really slowly. As a comparison (I know it is not a fair one), I tried to run a game using Wine. The graphics settings under Windows were much higher (1600x1200) with a high level of detail, while in Ubuntu, with the lowest level of detail, 1024x768 was the maximum. (My graphics card is a GeForce GTS 450 with 1 GB RAM.) Coming to my question: is there a way to measure the performance of 3D acceleration, Java applets, Firefox scrolling etc. with a tool and compare it with, let's say, a Windows OS or other users having almost the same hardware? Maybe it is a setup issue where some fundamental drivers are missing or something!? Any help, link or suggestion is appreciated! Cheerio, Woltan

    Read the article

  • Demantra Performance Clustering Factor Out of Order Ratio TABLE_REORG CHECK_REORG (Doc ID 1594372.1)

    - by user702295
    Hello! There is a new document available: Demantra Performance Clustering Factor Out of Order Ratio TABLE_REORG CHECK_REORG (Doc ID 1594372.1). The table reorganization can be set up to run automatically in version 7.3.1.5. In version 12.2.2 we run the TABLE_REORG.CHECK_REORG function at every appserver restart. If the function recommends a reorg, then we strongly encourage reorganizing the database object. This is documented in the official docs. In versions 7.3.1.3 and 7.3.1.4, the TABLE_REORG module exists and can be used. It has two main functions that are documented in the Implementation Guide Supplement, Release 7.3, Part No. E26760-03, chapter 4. In short, if you are using version 7.3.1.3 or higher, you can check for the need to run a reorg by doing the following 2 steps: 1. Run TABLE_REORG.CHECK_REORG('T'); 2. Check the table LOG_TABLE_REORG for recommendations. If you are on a version before 7.3.1.3, you will need to follow the instructions below to determine if you need to do a manual reorg. How to determine if a table reorg is needed: 1. It is strongly encouraged by DEV that you gather statistics on the required table. The preferred percentage for the gather is 100%. 2. Run the following SQL to evaluate how a table reorg might affect Primary Key (PK) based access: SELECT ui.index_name, trunc((ut.num_rows/ui.clustering_factor)/(ut.num_rows/ut.blocks),2) FROM user_indexes ui, user_tables ut, user_constraints uc WHERE ui.table_name=ut.table_name AND ut.table_name=uc.table_name AND ui.index_name=uc.index_name AND UC.CONSTRAINT_TYPE='P' AND ut.table_name=upper('&enter_table_name'); 3. Based on the result: a value above 0.75 does not require a reorg; a value between 0.5 and 0.75 means a reorg is recommended; a value lower than 0.5 means it is highly recommended to reorg.
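
    Put together, the two-step check described above might be run like this from SQL*Plus (the exact call syntax is an assumption; see Doc ID 1594372.1 for the details on your version):

      -- Step 1: ask the package to evaluate the tables ('T' as in the note above).
      EXEC TABLE_REORG.CHECK_REORG('T');

      -- Step 2: read the recommendations it logged.
      SELECT * FROM LOG_TABLE_REORG;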

    Read the article
