Search Results

Search found 28685 results on 1148 pages for 'query performance'.


  • How much configurability to give to users regarding concurrency?

    - by rwong
    This question narrows down these related questions: How much effort should we spend on programming for multiple cores? Concurrency: how do you approach the design and debug the implementation? Given that each user's computer may have different performance characteristics with respect to computation, memory, disk I/O bandwidth, and network I/O bandwidth, and that it is difficult to implement an automated self-tuning system in your software, how much configurability should we give to end users so that they can find ways (by trial and error?) to improve our software's efficiency? If we give users the ability to change these settings, how do we give them visual feedback so they can measure the performance changes?
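    For the thread-count knob specifically, one common compromise is "automatic by default, user-overridable". Below is a minimal sketch in Python; the worker-pool design and all names are assumptions for illustration, not taken from the question. It also prints the elapsed time, which is the simplest form of the feedback loop the question asks about:

        import os
        import time
        from concurrent.futures import ProcessPoolExecutor

        def process_one(item):
            return item * item  # placeholder for the real per-item work

        def resolve_worker_count(configured=None):
            """Honor an explicit user setting; otherwise detect the machine."""
            if configured and configured > 0:
                return configured
            return os.cpu_count() or 1  # fall back to 1 if detection fails

        def run_jobs(jobs, configured_workers=None):
            workers = resolve_worker_count(configured_workers)
            start = time.perf_counter()
            with ProcessPoolExecutor(max_workers=workers) as pool:
                results = list(pool.map(process_one, jobs))
            elapsed = time.perf_counter() - start
            # Surfacing this number in the UI lets users change the setting
            # and compare timings by trial and error.
            print(f"{workers} workers: {elapsed:.2f}s")
            return results

        if __name__ == "__main__":
            run_jobs(range(10000))  # default: one worker per detected core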

    Read the article

  • Why did this command make my Ubuntu more awesome than before?

    - by the_misfit
    After running Ubuntu for months and liking it, I wanted to use headphones for the first time. While trying to get them to work, I ran the code in step 1 of https://help.ubuntu.com/community/SoundTroubleshootingProcedure, and the result was a vast improvement in performance, resolution, and audio. What does that suggest about what was wrong with my settings before, if running this improved system performance, graphics, and my audio problem? The commands are:

        sudo add-apt-repository ppa:ubuntu-audio-dev/ppa
        sudo apt-get update
        sudo apt-get dist-upgrade
        sudo apt-get install linux-sound-base alsa-base alsa-utils gdm ubuntu-desktop linux-image-`uname -r` libasound2
        sudo apt-get -y --reinstall install linux-sound-base alsa-base alsa-utils gdm ubuntu-desktop linux-image-`uname -r` libasound2
        killall pulseaudio
        rm -r ~/.pulse*
        sudo usermod -aG `cat /etc/group | grep -e '^pulse:' -e '^audio:' -e '^pulse-access:' -e '^pulse-rt:' -e '^video:' | awk -F: '{print $1}' | tr '\n' ',' | sed 's:,$::g'` `whoami`

    Read the article

  • System user authentication via web interface [closed]

    - by donodarazao
    Background: We have one pretty slow and expensive satellite Internet connection that is shared in a network with 5-50 users. To limit traffic, users shall pay a certain sum of money per hour. Routing and traffic accounting on a per-user basis is done by an openSUSE 10.3 server. Login is done via pppoe, and for each connection, username, bytes_sent, bytes_rcvd, start_time, end_time, etc. are written into a MySQL database. Now it was decided that we want to change from time-based to volume-based pricing. As the original developer who installed the system a couple of years ago isn't available, I'm trying to do the changes. Although I'm absolutely new to all this, there is some progress. However, there's one point where I'm absolutely stuck. Up to now, only administrators can access connection details and billing information via a web interface. But as volume-based prices are less transparent to users than time-based prices, it is essential that users themselves can check their connections and what they cost via the web interface. For this, we need some kind of user authentication.

    Actual question: How do we develop such a user authentication? Every user has a Linux system user account, and the client machines connect to the pppoe server with that username and password. I thought of two possible ways to authenticate users.

    First possibility: users type their username and password into a form, which is then somehow checked. We already have the ability to change passwords via the web interface. Here are parts of the code. Part of the Perl script the homepage is linked to:

        #!/usr/bin/perl
        use CGI;
        use CGI::Carp qw(fatalsToBrowser);
        use lib '../lib';
        use own_perl_module;

        my @error;
        my $data;
        $query = new CGI;
        $username = $query->param('username') || '';
        $oldpasswd = $query->param('oldpasswd') || '';
        $passwd = $query->param('passwd') || '';
        $passwd2 = $query->param('passwd2') || '';
        own_perl_module::connect();
        if ($query->param('submit')) {
            my $benutzer = own_perl_module::select_benutzer(username => $username)
                or push @error, "user not exists";
            push @error, "your password?!?" unless $passwd;
            unless (@error) {
                own_perl_module::update_benutzer($benutzer->{id},
                    { oldpasswd => $oldpasswd, passwd => $passwd, passwd2 => $passwd2 },
                    error => \@error) and push @error, "Password changed.";
            }
        }

    Here's part of the sub update_benutzer in the own_perl_module:

        if ($dat->{passwd} ne '') {
            my $username = $dat->{username} || $select->{username};
            my $system = "./chpasswd.pl '$username' '$dat->{passwd}'"
                . (defined($dat->{oldpasswd}) ? " '$dat->{oldpasswd}'" : undef);
            my $answer = `$system`;
            if ($? != 0) {
                chomp($answer);
                push @$error, $answer || "error changing password ($?)";

    Here's chpasswd.pl:

        #!/usr/bin/perl
        use FileHandle;
        use IPC::Open3;

        local $username = shift;
        local $passwd = shift;
        local $oldpasswd = shift;
        local $chat = {
            'Old Password: $' => sub { print POUT "$oldpasswd\n"; },
            'New password: $' => sub { print POUT "$passwd\n"; },
            'Re-enter new password: $' => sub { print POUT "$passwd\n"; },
            '(.*)\n$' => sub { print "$1\n"; exit 1; }
        };
        local $/ = \1;
        my $command;
        if (defined($oldpasswd)) {
            $command = "sudo -u '$username' /usr/bin/passwd";
        } else {
            $command = "sudo /usr/bin/passwd '$username'";
        }
        $pid = open3(\*POUT, \*PIN, \*PERR, $command) or die;
        my $buffer;
        LOOP: while($_ = <PERR>) {
            $buffer .= $_;
            foreach (keys(%$chat)) {
                if ($buffer =~ /$_/i) {
                    $buffer = undef;
                    &{$chat->{$_}};
                }
            }
        }
        exit;

    Could this somehow be adjusted to verify users without changing their passwords?

    The second possibility I see: all pppoe connections are logged in the MySQL database. If I could somehow retrieve the username (or uid) of the user connected via pppoe, this could be used to authenticate users. Users could then only check their connections and costs while they are online (and thus paying money), but that could be tolerated. Here's the line of the script that inserts connections into the database:

        my $username = $ENV{PEERNAME};

    I thought it would be easy to use this variable, but $username seems to be always empty in test scripts (print $username). Any idea how to retrieve the user connected to the pppoe server? Sorry for the long question! Any help would be very much appreciated. :)
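    For the first possibility, one way to reuse the expect-style approach of chpasswd.pl for verification only is to drive su instead of passwd: su exits with status 0 only when the supplied password is correct, and running /bin/true means nothing else happens on success. A minimal sketch in Python (assuming the third-party pexpect package; check_password is a hypothetical name, and the prompt text may vary by system):

        import pexpect  # assumed available: pip install pexpect

        def check_password(username, password):
            # su only succeeds if the password is right; /bin/true is a no-op,
            # so a zero exit status means the credentials were valid.
            child = pexpect.spawn("/bin/su", ["-", username, "-c", "/bin/true"])
            child.expect("Password:")
            child.sendline(password)
            child.expect(pexpect.EOF)
            child.close()
            return child.exitstatus == 0

        if __name__ == "__main__":
            print(check_password("testuser", "secret"))  # hypothetical account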

    Read the article

  • In a browser, is it best to use one huge spritesheet or many (10000) different PNGs?

    - by Nick
    I'm creating a game in jQuery, where I use about 10000 32x32 tiles. Until now, I have been using them all separately (no sprite sheet). An average map uses about 2000 tiles (sometimes re-used PNGs, but all separate divs), and the performance ranges from stable (Chrome) to a bit laggy (Firefox). Each of these divs is positioned absolutely using CSS. They do not need to be updated every tick, just when a new map is loaded. Would it be better for performance to use sprite-sheet methods for the divs, using CSS background positioning like gameQuery does? Thank you in advance!

    Read the article

  • Is it possible that Unity would some day switch back to Mutter?

    - by David
    I remember that the first Unity was indeed built on Mutter but was later ported to Compiz due to poor performance. I also know Canonical has practically brought Compiz in to work closely with future Unity development, so this is getting less likely. But Compiz seems pretty outdated now that GNOME3/GTK3/Mutter is becoming more mainstream, and it is known to have some performance issues. Mutter, on the other hand, seems pretty good and is still under steady development, so I'm wondering if anyone related to the project is still testing and evaluating the possibility of Unity on Mutter. Not that you have to tell me now whether you're going to do it or not; I just wanna know if anyone is considering it. Thanks.

    Read the article

  • Oracle R Enterprise 1.1 Download Available

    - by Sherry LaMonica
    Oracle just released the latest update to Oracle R Enterprise, version 1.1. This release includes the Oracle R Distribution (based on open source R, version 2.13.2), an improved server installation, and much more. The key new features include:

        • Extended Server Support: new support for Windows 32- and 64-bit server components, as well as continuing support for Linux 64-bit server components
        • Improved Installation: the Linux 64-bit server installation now provides robust status updates and prerequisite checks
        • Performance Improvements: improved performance for embedded R script execution

    In addition, the updated ROracle package, which is used with Oracle R Enterprise, now reads date data by conversion to character strings. We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for R-related software: Oracle R Distribution, Oracle R Enterprise, ROracle, Oracle R Connector for Hadoop. As always, we welcome comments and questions on the Oracle R Forum.

    Read the article

  • New Exadata and Exalogic public references

    - by Javier Puerta
    The following are new public references for Exadata and Exalogic:

        • Allegis Accelerates HR Processing for 130,000 Contractors: Oracle customer Allegis describes how Oracle Exadata and Oracle Exalogic helped consolidate and optimize critical processes running in Oracle's PeopleSoft.
        • Hyundai Motor Company Cuts Document Repository Management and Access Times Approximately 85%, Saves More Than US$1 Million in Yearly Printing and Paper Costs: the company implemented Oracle Exalogic Elastic Cloud, Oracle Exadata Database Machine, Oracle WebLogic, and Oracle WebCenter Content 11g to ensure high performance and stability for its new document-centralization system.
        • University of Minnesota Reduces Data Center Footprint While Enhancing Performance and Manageability with Oracle Exadata Database Machine: the leading research institution consolidated more than 200 databases to approximately 20 while maximizing availability for thousands of users.
        • SThree Prepares to Triple in Size with a Cloud-Based Architecture and a Consolidated, Stable, and Scalable Global Platform: by consolidating 68 databases into a single Oracle Exadata Database Machine, SThree achieved the stability and scalability it needed to support its growth targets. Further enhancements to the organization's core systems include a planned upgrade for Siebel Contact Center and improved integration with Oracle Fusion Middleware.

    Read the article

  • Newly installed 12.04 on Radeon or NVIDIA type of gfx card?

    - by Falk
    I use an Nvidia Quadro NVS 290/PCIe/SSE2 at work with a dual monitor setup. Since 11.10, performance has gone downhill, and now with 12.04 it is even worse; I can agree that the 290 is old and puny. So when I look at what low-profile cards I can get for my computer today, the choice is between a Radeon HD 6570 and an NVIDIA Quadro 600. I have always used Nvidia on Linux because I think that historically their drivers worked better. But which one do you recommend today on 12.04 and Unity 3D? -- Regards, Falk

    Read the article

  • Oracle Unveils Breakthrough Technology: Database In-Memory

    - by Mala Narasimharajan
    Missed Larry Ellison's big announcement this morning? Today, Oracle announced Oracle Database In-Memory. Oracle Database In-Memory transparently extends the power of Oracle Database 12c to enable organizations to discover business insights in real time while simultaneously increasing transactional performance. Here's why you should care: this new breakthrough technology enables enterprises to get faster answers to business questions, ultimately leading to faster business action. Oracle Database In-Memory delivers leading-edge in-memory performance without the need to restrict functionality or accept compromises, complexity, and risk. Deploying Oracle Database In-Memory with virtually any existing Oracle Database-compatible application is as easy as flipping a switch; no application changes are required. For more information on Oracle Database In-Memory, go to http://www.oracle.com/us/corporate/press/2215795

    Read the article

  • When should an API favour optimization over readability and ease-of-use?

    - by jmlane
    I am in the process of designing a small library, where one of my design goals is to use as much of the native domain language as possible in the API. While doing so, I've noticed that there are some cases in the API outline where a more intuitive, readable attribute/method call requires some functionally unnecessary encapsulation. Since the final product will not necessarily require high performance, I am unconcerned about favouring ease of use in my current project over the most efficient implementation of the code in question. I know not to assume readability and ease of use are paramount in all expected use cases, such as when performance is required. Are there more general reasons that argue for an API design preferring (marginally) more efficient implementations?

    Read the article

  • SQL Saturday #294 - Philadelphia

    SQL Saturday is coming to Philadelphia on June 7, 2014. This event is a free day of training and networking for SQL Server professionals, organized by the Philadelphia SQL Server User Group. The event also features two paid-for pre-cons, one presented by Allan Hirt and the other presented jointly by Joseph D'Antoni and Stacia Misner. Register while space is available.

    Read the article

  • When should code favour optimization over readability and ease-of-use?

    - by jmlane
    I am in the process of designing a small library, where one of my design goals is that the API should be as close to the domain language as possible. While working on the design, I've noticed that there are some cases in the code where a more intuitive, readable attribute/method call requires some functionally unnecessary encapsulation. Since the final product will not necessarily require high performance, I am unconcerned about favouring ease of use in my current project over the most efficient implementation of the code in question. I know not to assume readability and ease of use are paramount in all expected use cases, such as when performance is required. Are there more general reasons that argue for a design preferring more efficient implementations, even if only marginally so?

    Read the article

  • One Week on New Servers and Everything is Great

    - by Jeff Julian
    It has been a week since we moved our Geekswithblogs.net system to a new set of load-balanced servers, and everything has been going great. I am so amazed at the performance of the new hardware. On average, we use less than 5% of the CPU at any given moment on the database and web servers. I have seen a performance boost in page loads as well, but I will have to confirm that with the statistics as they roll in. This is all in preparation for a new community we are launching with some friends, which we will be announcing shortly. We will be launching a nice little contest for our bloggers as well.

    Read the article

  • MySQL Workbench 5.2.39 GA Released

    - by user13164789
    The MySQL Developer Tools team is announcing the next maintenance release of its flagship product, MySQL Workbench, version 5.2.39. This version contains MySQL Utilities 1.0.5, a set of command-line Python utilities for helping to perform and script various administration tasks for MySQL. A complete list of changes in this release of the Utilities can be found at http://dev.mysql.com/doc/workbench/en/wb-utils-news-1-0-5.html

    MySQL Workbench 5.2 GA:

        • Data Modeling
        • Query (replaces the old MySQL Query Browser)
        • Administration (replaces the old MySQL Administrator)

    Please get your copy from our download site. Sources and binary packages are available for several platforms, including Windows, Mac OS X, and Linux: http://dev.mysql.com/downloads/workbench/

    Workbench documentation: http://dev.mysql.com/doc/workbench/en/index.html
    Utilities documentation: http://dev.mysql.com/doc/workbench/en/mysql-utilities.html

    In addition to the new Query/SQL Development and Administration modules, version 5.2 features improved stability and performance – especially on Windows, where OpenGL support has been enhanced and the UI was optimized to offer better responsiveness. This release also includes improvements to the scripting capabilities of the SQL Editor. You can read more about it at http://wb.mysql.com/workbench/doc/

    For a detailed list of resolved issues, see the change log: http://dev.mysql.com/doc/workbench/en/wb-change-history.html

    If you need any additional info or help, please get in touch with us. Post in our forums or leave comments on our blog pages. - The MySQL Workbench Team

    Read the article

  • Recent Solaris Studio how-to articles

    - by unixman
    There were a few Oracle Solaris Studio articles published recently, check'em out!

        - How to Develop Code from a Remote Desktop with Oracle Solaris Studio: describes the remote desktop feature of the Oracle Solaris Studio IDE, and how to use it to compile, run, debug, and profile your code running on remote servers.
        - How to Use Remote Development in the IDE: describes the modes of remote development available in the Oracle Solaris Studio 12.3 IDE and how to choose the best one for your development environment.
        - Performance Tips for the Oracle Solaris Studio IDE: describes some tips and tricks to help you improve the performance of the Oracle Solaris Studio IDE.

    Read the article

  • MySQL com_select?

    - by symcbean
    I'm looking to tune my query cache a bit. According to "7.6.3.4 Query Cache Status and Maintenance" in the manual, the Com_select value is given by this formula:

        Com_select = Qcache_inserts + Qcache_not_cached
                       + queries with errors found during the column-privileges check

    However, "5.1.5 Server Status Variables" suggests that this counter is maintained by the DBMS itself. Having said that,

        mysql> SHOW STATUS LIKE 'Com_select%';

    always returns a value of 1, and I'm pretty sure I've run more than one non-cached SELECT query on my database since it started. It looks as if other people are similarly confused. Is this status variable redundant? Which bit of the manual is wrong? TIA
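    One quick way to check the formula against a live server is to read the counters and compare them (a sketch, assuming the mysql-connector-python package, a server with the query cache enabled, and placeholder credentials; the error-counted queries have no dedicated counter, so the sum is only a lower bound):

        import mysql.connector  # assumed: pip install mysql-connector-python

        conn = mysql.connector.connect(user="root", password="...", host="localhost")
        cur = conn.cursor()

        def status(name):
            # The connector interpolates parameters client-side, so SHOW ... LIKE
            # can be parameterized here even though the server can't prepare it.
            cur.execute("SHOW GLOBAL STATUS LIKE %s", (name,))
            row = cur.fetchone()
            return int(row[1]) if row else 0

        com_select = status("Com_select")
        expected = status("Qcache_inserts") + status("Qcache_not_cached")
        print(f"Com_select={com_select}, Qcache_inserts+Qcache_not_cached={expected}")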

    Read the article

  • My computer freezes after a few minutes of watching DVB TV in Kaffeine, but only when I use the CPU scaling utilities

    - by digitalcrow
    My computer freezes after a few minutes of watching DVB TV in Kaffeine, but only when I use the CPU scaling utilities to put the CPU in performance mode. It freezes completely, repeating a small sound loop from the TV; I can't switch to a terminal, etc. I have a Phenom II 3.40 GHz dual-core CPU, but I unlocked the other two cores, so now it's a quad-core. On Windows I don't have any problems: I use performance mode all the time and it never breaks. I'm really disappointed...

    Read the article

  • Using Queries with Coherence Write-Behind Caches

    - by jpurdy
    Applications that use write-behind caching and wish to query the logical entity set have the option of querying the NamedCache itself or querying the database. In the former case, no particular restrictions exist beyond the limitations intrinsic to the Coherence query engine itself. In the latter case, queries may see partially committed transactions (e.g. with a parent-child relationship, the version of the parent may be different than the version of the child objects) and/or significant version skew (the query may see the current version of one object and a far older version of another object). This is consistent with "read committed" semantics, but the read skew may be far greater than would ever occur in a non-cached environment. As is usually the case, the application developer may choose to accept these limitations (with the hope that they are sufficiently infrequent), or they may choose to validate the reads (perhaps via a version flag on the objects). This also applies to situations where a third party application (such as a reporting tool) is querying the database. In many cases, the database may only be in a consistent state after the Coherence cluster has been halted.
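    A minimal illustration of the read-validation option mentioned above, as a language-agnostic sketch in Python rather than the Coherence API (fetch_matching and fetch_version are hypothetical stand-ins for the application's data access): run the query, then re-read each object's version flag and retry if anything moved mid-query.

        def query_with_validation(fetch_matching, fetch_version, retries=3):
            """Return a result set whose version flags were stable, or raise."""
            for _ in range(retries):
                rows = fetch_matching()  # the actual query
                versions = {row["id"]: row["version"] for row in rows}
                # If no version changed between the query and this re-check,
                # the read saw no skew worth worrying about.
                if all(fetch_version(oid) == v for oid, v in versions.items()):
                    return rows
            raise RuntimeError("could not obtain a version-consistent read")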

    Read the article

  • Increasing efficiency of N-Body gravity simulation

    - by Postman
    I'm making a space exploration type game; it will have many planets and other objects that all have realistic gravity. I currently have a system in place that works, but if the number of planets goes above 70, the FPS decreases at a practically exponential rate. I'm making it in C# and XNA. My guess is that I should be able to do gravity calculations between 100 objects without this kind of strain, so clearly my method is not as efficient as it should be. I have two files, Gravity.cs and EntityEngine.cs. Gravity manages JUST the gravity calculations; EntityEngine creates an instance of Gravity and runs it, along with other entity-related methods.

    EntityEngine.cs (only the relevant piece of code, self-explanatory; when an instance of Gravity is made in EntityEngine, it passes itself (this) into it, so that Gravity can access entityEngine.Entities, a dictionary of all planet objects):

        public void Update()
        {
            foreach (KeyValuePair<string, Entity> e in Entities)
            {
                e.Value.Update();
            }
            gravity.Update();
        }

    Gravity.cs:

        namespace ExplorationEngine
        {
            public class Gravity
            {
                private EntityEngine entityEngine;
                private Vector2 Force;
                private Vector2 VecForce;
                private float distance;
                private float mult;

                public Gravity(EntityEngine e)
                {
                    entityEngine = e;
                }

                public void Update()
                {
                    // First loop
                    foreach (KeyValuePair<string, Entity> e in entityEngine.Entities)
                    {
                        // Reset the force vector
                        Force = new Vector2();

                        // Second loop
                        foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities)
                        {
                            // Make sure the second value is not the current value from the first loop
                            if (e2.Value != e.Value)
                            {
                                // Find the distance between the two objects. Because Fg = G * ((M1 * M2) / r^2),
                                // using Vector2.Distance() and then squaring it is pointless and inefficient:
                                // Distance uses a sqrt, and squaring the result simply cancels that sqrt.
                                distance = Vector2.DistanceSquared(e2.Value.Position, e.Value.Position);

                                // This makes sure that two planets do not attract each other if they are
                                // touching; it becomes unnecessary once I add collision. For now it just keeps
                                // the planets from being glitchy; removing this IF does not significantly
                                // improve performance.
                                if (Math.Sqrt(distance) > (e.Value.Texture.Width / 2 + e2.Value.Texture.Width / 2))
                                {
                                    // Calculate the magnitude of Fg (I'm using my own gravitational constant (G)
                                    // for the sake of time; it's 1 at the moment, but I've been changing it)
                                    mult = 1.0f * ((e.Value.Mass * e2.Value.Mass) / distance);

                                    // Calculate the direction of the force. Subtracting the positions and
                                    // normalizing works; this keeps diagonal vectors from having a larger
                                    // value and basically makes VecForce a direction.
                                    VecForce = e2.Value.Position - e.Value.Position;
                                    VecForce.Normalize();

                                    // Add the vector for each planet in the second loop to a force var.
                                    // (I have tried Force += VecForce * mult and have not noticed much of
                                    // an increase in speed.)
                                    Force = Vector2.Add(Force, VecForce * mult);
                                }
                            }
                        }

                        // Add that force to the first loop's planet's position
                        // (later on I'll instead add to acceleration, to account for inertia)
                        e.Value.Position += Force;
                    }
                }
            }
        }

    I have used various tips (about gravity optimizing, not threading) from THIS question (that I made yesterday). I've made this gravity method (Gravity.Update) as efficient as I know how to make it, but this O(N^2) algorithm still seems to be eating up all of my CPU power. Here is a LINK (Google Drive: go to File > Download, keep the .exe with the content folder; you will need the XNA Framework 4.0 Redist if you don't already have it) to the current version of my game. Left click makes a planet; right click removes the last planet. The mouse moves the camera, and the scroll wheel zooms in and out. Watch the FPS and planet count to see what I mean about performance issues past 70 planets. (All 70 planets must be moving; I've had 100 stationary planets and only 5 or so moving ones while still getting 300 FPS. The issue arises when 70+ are moving around.) After 70 planets are made, performance tanks exponentially: with fewer than 70 planets I get 330 FPS (I have it capped at 300), at 90 planets the FPS is about 2, and beyond that it sticks around 0 FPS. Strangely enough, when all planets are stationary, the FPS climbs back up to around 300, but as soon as something moves, it goes right back down to what it was; I have no systems in place to make this happen, it just does. I considered multithreading, but that previous question I asked taught me a thing or two, and I see now that it's not a viable option. I've also thought maybe I could do the calculations on my GPU instead, though I don't think it should be necessary. I also don't know how to do this; it is not a simple concept, and I want to avoid it unless someone knows a really noob-friendly, simple way that will work for an n-body gravity calculation. (I have an NVidia GTX 660.) Lastly, I've considered using a quadtree-type system (Barnes-Hut simulation). I've been told (in the previous question) that this is a good method that is commonly used, and it seems logical and straightforward, but the implementation is way over my head and I haven't found a good tutorial for C# yet that explains it in a way I can understand or uses code I can eventually figure out. So my question is this: how can I make my gravity method more efficient, allowing me to use more than 100 objects (I can render 1000 planets at a constant 300+ FPS without gravity calculations)? And if I can't do much to improve performance (including some kind of quadtree system), could I use my GPU to do the calculations?
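    One cheap structural win, independent of language: Newton's third law means each unordered pair only needs to be visited once, with the resulting force applied to both bodies with opposite signs, halving the inner-loop work. A sketch in Python rather than the asker's C# (bodies as (x, y, mass) tuples; names are illustrative):

        import math

        def compute_forces(bodies, G=1.0):
            """Accumulate the gravitational force on each body, one pass per pair."""
            forces = [(0.0, 0.0)] * len(bodies)
            for i in range(len(bodies)):
                xi, yi, mi = bodies[i]
                for j in range(i + 1, len(bodies)):  # j > i: each pair visited once
                    xj, yj, mj = bodies[j]
                    dx, dy = xj - xi, yj - yi
                    dist_sq = dx * dx + dy * dy
                    if dist_sq == 0.0:
                        continue  # coincident bodies: skip, like the touching-check above
                    mag = G * mi * mj / dist_sq          # Fg = G * M1 * M2 / r^2
                    inv = 1.0 / math.sqrt(dist_sq)       # normalizes the direction
                    fx, fy = mag * dx * inv, mag * dy * inv
                    forces[i] = (forces[i][0] + fx, forces[i][1] + fy)
                    forces[j] = (forces[j][0] - fx, forces[j][1] - fy)  # equal, opposite
            return forces

        # Example: three unit-mass bodies on a line.
        print(compute_forces([(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (2.0, 0.0, 1.0)]))

    This is still O(N^2); getting asymptotically below that is exactly what the Barnes-Hut quadtree buys. But halving the constant via pair symmetry translates directly into the existing C# double loop and is a straightforward first step.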

    Read the article

  • MVC pattern synchronisation

    - by Hariprasad
    I am facing a problem synchronizing my model and view threads. I have a view, which is a table in which the user can select a few rows. I update the view as soon as the user clicks on a row, since I don't want the UI to be slow; this updating is done by logic that runs in the controller thread. At the same time, the controller updates the model data, which takes place in a different thread: the controller puts the query in a queue, which is then executed by the model thread (a single-threaded interface). As soon as the query executes, the controller gets a signal. Now, in order to keep the view and model synchronized, I update the view again based on the return value of the query (the data returned by the model), even though I already updated the view for that user action. But I am facing issues because it takes a lot of time for the model to return the result, by which time the user may have performed multiple clicks. As a result of updating the view again based on the information from the model, the view sometimes goes back to the state produced by an earlier click. (Suppose the user clicks three times on different rows. I update the view as soon as each click happens. I also update the view when I get data back from the model, which is supposed to match the already-updated state of the view. Now, when the user clicks the third time, I get the data for the first click back from the model; as a result, the view goes back to the state generated by the first click.) Is there any way to handle such a synchronization issue?
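    One common fix is to stamp every request with a monotonically increasing sequence number and let the view ignore any reply that is older than the newest user action. A minimal sketch in Python, independent of any particular UI framework (all names are hypothetical):

        import itertools

        class StaleReplyFilter:
            """Drop model replies that are older than the newest user action."""
            def __init__(self):
                self._seq = itertools.count()
                self._latest = -1

            def next_request(self):
                # Call when the user clicks: stamp the outgoing model query.
                self._latest = next(self._seq)
                return self._latest

            def is_current(self, request_id):
                # Call when a model reply arrives: only the newest may touch the view.
                return request_id == self._latest

        # Usage: the user clicks row A, then row B before A's reply arrives.
        f = StaleReplyFilter()
        first = f.next_request()
        second = f.next_request()
        print(f.is_current(first))   # False - stale reply for A is ignored
        print(f.is_current(second))  # True  - reply for B may update the view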

    Read the article
