Search Results

Search found 73305 results on 2933 pages for 'copy run start'.


  • TDD/Tests too much an overhead/maintenance burden?

    - by MeshMan
    So you've heard it many times from those who do not truly understand the value of testing. Just to start things out, I'm a follower of Agile and testing... I recently had a discussion about performing TDD on a product rewrite where the current team does not practice unit testing at any level, and has probably never heard of dependency injection or test patterns/design (we won't even get on to clean code). Now, I am fully responsible for the rewrite of this product, and I'm told that attempting it in the fashion of TDD will merely make it a maintenance nightmare and impossible for the team to maintain. Furthermore, as it's a front-end application (not web-based), adding tests is supposedly pointless: as the business drives changes (by changes they mean improvements, of course), the tests will become out of date, developers who join the project in the future will not maintain them, and they will become more of a burden to fix. I can understand that TDD in a team that does not currently hold any testing experience doesn't sound good, but my argument in this case is that I can teach my practice to those around me, and furthermore, I know that TDD makes BETTER software. Even if I were to produce the software using TDD and throw all the tests away on handing it over to a maintenance team, it would surely be a better approach than not using TDD at all from the start? I've been shot down whenever I've mentioned doing TDD on most projects for a team that has never heard of it. The thought of "interfaces" and strange-looking DI constructors scares them off... Can anyone please help me in what is normally a very short conversation of trying to sell TDD and my approach to people? I usually have a very short window of argument before falling at the knees to the company/team.
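    One concrete way to defuse the "strange looking DI constructors" objection is to show how small the pattern really is. Below is a minimal, hypothetical C# sketch (the names PriceCalculator, ITaxRates and the 0.2 rate are invented for illustration) of a constructor-injected dependency and the one-line fake that makes the class testable without any framework:

        // A hypothetical service the calculator depends on.
        public interface ITaxRates
        {
            decimal RateFor(string region);
        }

        public class PriceCalculator
        {
            private readonly ITaxRates _taxRates;

            // Constructor injection: this parameter is the only "strange" part.
            public PriceCalculator(ITaxRates taxRates)
            {
                _taxRates = taxRates;
            }

            public decimal Total(decimal net, string region)
            {
                return net * (1 + _taxRates.RateFor(region));
            }
        }

        // In a test, a trivial fake stands in for the real rates source.
        public class FixedTaxRates : ITaxRates
        {
            public decimal RateFor(string region) { return 0.2m; }
        }

        // e.g. with NUnit:
        // var calc = new PriceCalculator(new FixedTaxRates());
        // Assert.AreEqual(120m, calc.Total(100m, "UK"));

    A five-minute demo like this often does more to win a sceptical team over than an hour of arguing about methodology.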

    Read the article

  • Installation taking a very long time, hangs at "Configuring bcmwl-kernel-source"

    - by user290522
    I am installing Ubuntu 14.04 (32-bit) on my laptop (Compaq Presario V2000), and after about 7 hours it is still in "Configuring bcmwl-kernel-source (i386)" mode. The messages I read are as follows:

        ubuntu kernel: [22814.858163] ACPI: \_SB_.PCI0.LPC0.LPC0.ACAD: ACPI_NOTIFY_BUS_CHECK event: unsupported

    with the numbers in the square brackets increasing. I had Windows XP Professional on this laptop, and I am erasing it. I am not sure if I should turn off the laptop and start all over again. About 4 years ago I installed Ubuntu on this laptop, and that was very fast. The only problem I encountered then was my wireless: I could not make it work, and switched back to Windows. I appreciate any comments regarding this installation taking such a long time. After 40 hours the installation was still in configuring mode with the following message:

        ubuntu CRON[29329]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)

    I did the following to check for errors: pressed Ctrl-Alt-F2. This time the system froze. I had no other choice but to turn off the laptop and start all over again. The exact model of the laptop is "Compaq Presario V2069CL Notebook PC" with an AMD processor.

    Read the article

  • Help, broken Gsettings

    - by Rene
    I was trying to disable the global menu as per http://ubuntuhandbook.org/index.php/2013/07/disable-global-menu-on-ubuntu-13-10-saucy/#comment-8612, but while it didn't change anything, after running the autoremove command unity-tweak-tool broke. Obviously my first reaction was to re-install the removed package, but it remains broken. TBH I don't know if it is even related or just a coincidence. When I start it from the launcher it just blinks and disappears. When I start it from the terminal I get this error:

        $ gnome-tweak-tool
        WARNING : Shell not installed or running
        WARNING : Error detecting shell
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/gtweak/tweaks/tweak_shell_extensions.py", line 199, in __init__
            raise Exception("Shell not running or DBus service not available")
        Exception: Shell not running or DBus service not available
        INFO : GSettings missing key org.gnome.nautilus.desktop (key computer-icon-visible)
        WARNING : Shell not running None
        INFO : GSettings missing key org.gnome.mutter (key workspaces-only-on-primary)
        Segmentation fault (core dumped)

    I had a look with dconf-editor to see if I could just add the missing key, but apparently keys aren't meant to be added "by hand". So how can I fix this? I'd prefer not having to reinstall everything. Which package is broken, and can I just reinstall that? EDIT: I found that when run as root, gnome-tweak-tool no longer crashes, so possibly a permission issue somewhere. I don't know that I changed any permissions. Another related problem, actually the reason I noticed the problem at all, is that unity-tweak-tool no longer seems to want to save edited values. I normally just have the Unity launcher on the primary display, but wanted to check what it was like having it on both. I didn't like it, so I went into unity-tweak-tool to set it back - but regardless of how many times I tick "only primary display" it never changes anything. What does unity-tweak-tool actually change, and can I do this directly somehow?
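    Since the tool works as root but crashes as a normal user, one commonly reported culprit is a root-owned dconf database in the user's home directory (for example after running a graphical tool with sudo). A hedged check, assuming the default config location:

        # If this file is owned by root, settings can't be read or written by your user
        ls -l ~/.config/dconf/user

        # Give ownership back to your account
        sudo chown -R $USER:$USER ~/.config/dconf

    If the ownership was already correct, this isn't the cause and the broken-package angle is worth pursuing instead.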

    Read the article

  • Implementation details of database synchronisation API

    - by Daniel
    I want to achieve database synchronisation between my server database and a client application. The server would run MySQL, and the applications may run different database technologies; their implementation isn't important. I have a MySQL database online and web-accessible via an API I wrote in PHP (just a detail). My client application ships with a copy of the online data. As time passes, my goal is to check for any changes in the online database and make these updates available to the client app via an API call: the client sends a date to an API endpoint corresponding to the last time the app was updated, and the response would be JSON filled with all new objects, updated objects, and deleted IDs, which makes it possible to update the local store appropriately. Essentially I want to do this: http://dbconvert.com/synchronization.php My question is about the implementation details. Would I need to add a column to my database tables with a "last modified" date? Since the client app could be very out of date if it's been offline for a long time, does that also mean I shouldn't delete data from the online database, but instead have another column called "delete" set to 1 and the modified date updated appropriately? Would my SQL query simply check for all data with a modified date later than the date passed into the API request by the client? I feel like there's a lot more to it than having a ton of dates everywhere. I also worry that I will need to persist a lot of old data in order to ensure that old versions of the client app always have the opportunity to delete parts of their data when they are able to sync.
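    For what it's worth, the scheme described above (a last-modified timestamp plus a soft-delete flag) is the usual approach. Below is a minimal, hypothetical C# sketch of the server-side delta query using the MySQL Connector/Net API; the table and column names (items, last_modified, is_deleted) are invented for illustration and error handling is omitted:

        using System;
        using MySql.Data.MySqlClient;

        static class SyncApi
        {
            // Fetch every row touched since the client's last sync.
            // Deleted rows come back too, flagged by is_deleted = 1,
            // so the client can remove them from its local store.
            public static void FetchChanges(MySqlConnection conn, DateTime since)
            {
                const string sql =
                    @"SELECT id, payload, is_deleted, last_modified
                      FROM items
                      WHERE last_modified > @since";

                using (var cmd = new MySqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@since", since);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Serialize each row into the JSON response here.
                        }
                    }
                }
            }
        }

    Two practical notes: have the server maintain last_modified (e.g. a MySQL ON UPDATE CURRENT_TIMESTAMP column) rather than trusting client clocks, and once every client is known to have synced past a given date, soft-deleted rows older than that date can safely be purged.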

    Read the article

  • Problem installing Apache 2.4.2 in Ubuntu 12.04

    - by Michael
    I followed these steps to install Apache 2.4.2 in Ubuntu 12.04, but it seems Apache is not installed. Here's what I did (I followed the steps on this site: http://www.discusswire.com/apache-2-4-installation-ubuntu/):

        sudo apt-get install build-essential
        sudo apt-get build-dep apache2
        wget http://apache.mirrors.pair.com/httpd/httpd-2.4.2.tar.gz
        tar -xzvf httpd-2.4.2.tar.gz && cd httpd-2.4.2
        sudo ./configure --prefix=/usr/local/apache2 --enable-mods-shared=all --enable-deflate --enable-proxy --enable-proxy-balancer --enable-proxy-http --with-mpm=prefork
        sudo make
        sudo make install

    When I tried to start it by issuing sudo /usr/local/apache2/bin/apachectl start at the terminal, I got the following warning: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message". And when I typed top at the terminal, apache was not there. I also tried to go to http://localhost/ or 127.0.0.1 or even 127.0.1.1, and it showed a "Can't establish connection to server" message. PS: I checked the error log and it showed:

        [Fri Jul 27 15:49:00.703901 2012] [proxy_balancer:emerg] [pid 20781] AH01177: Failed to lookup provider 'shm' for 'slotmem': is mod_slotmem_shm loaded??
        [Fri Jul 27 15:49:00.704083 2012] [:emerg] [pid 20781] AH00020: Configuration Failed, exiting

    What am I missing? Thanks Michael
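    The AH01177 line is the real failure: mod_proxy_balancer needs a shared-memory provider, and with --enable-mods-shared=all the mod_slotmem_shm module is built but not necessarily loaded. A hedged sketch of the httpd.conf lines to check (paths assume the --prefix used above):

        # /usr/local/apache2/conf/httpd.conf
        # Uncomment (or add) these so the balancer can find its 'shm' provider
        LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

        # Silences the AH00558 warning as well
        ServerName localhost

    After editing, /usr/local/apache2/bin/apachectl configtest should report "Syntax OK" before a restart is attempted.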

    Read the article

  • Unable to cast transparent proxy to type <type>

    - by Rick Strahl
    This is not the first time I've run into this wonderful error while creating new AppDomains in .NET and then trying to load types and access them across app domains. In almost all cases where I've run into this error, the problem comes from the two AppDomains involved loading different copies of the same type. Unless the types match exactly and come from exactly the same assembly, the typecast will fail. The most common scenario is that the types are loaded from different assemblies - as unlikely as that sounds.

    An Example of Failure

    To give some context, I'm working on some old code in Html Help Builder that creates a new AppDomain in order to parse assembly information for documentation purposes. I create a new AppDomain in order to load up an assembly, process it, and then immediately unload it along with the AppDomain. The AppDomain allows for unloading that otherwise wouldn't be possible, as well as isolating my code from the assembly being loaded. The process to accomplish this is fairly established and I use it for lots of applications that use add-in like functionality - basically anywhere code needs to be isolated and have the ability to be unloaded. My pattern for this is:

    1. Create a new AppDomain
    2. Load a factory class into the AppDomain
    3. Use the factory class to load additional types from the remote domain

    Here's the relevant code from my TypeParserFactory that creates a domain and then loads a specific type - TypeParser - that is accessed cross-AppDomain in the parent domain:

        public class TypeParserFactory : System.MarshalByRefObject, IDisposable
        {
            ...
            /// <summary>
            /// TypeParser factory method that loads the TypeParser
            /// object into a new AppDomain so it can be unloaded.
            /// Creates the AppDomain and creates the type.
            /// </summary>
            public TypeParser CreateTypeParser()
            {
                if (!CreateAppDomain(null))
                    return null;

                // Create the instance inside of the new AppDomain
                // Note: remote domain uses local EXE's AppBasePath!!!
                TypeParser parser = null;
                try
                {
                    Assembly assembly = Assembly.GetExecutingAssembly();
                    string assemblyPath = Assembly.GetExecutingAssembly().Location;
                    parser = (TypeParser)this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                                             typeof(TypeParser).FullName).Unwrap();
                }
                catch (Exception ex)
                {
                    this.ErrorMessage = ex.GetBaseException().Message;
                    return null;
                }
                return parser;
            }

            private bool CreateAppDomain(string lcAppDomain)
            {
                if (lcAppDomain == null)
                    lcAppDomain = "wwReflection" + Guid.NewGuid().ToString().GetHashCode().ToString("x");

                AppDomainSetup setup = new AppDomainSetup();

                // *** Point at current directory
                setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory;
                //setup.PrivateBinPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin");

                this.LocalAppDomain = AppDomain.CreateDomain(lcAppDomain, null, setup);

                // Need a custom resolver so we can load the assembly from a non-current path
                AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve);

                return true;
            }
            ...
        }

    Note that the classes must be either [Serializable] (by value) or inherit from MarshalByRefObject in order to be accessible remotely. Here I need to call methods on the remote object, so all classes are MarshalByRefObject. The specific problem is the code that loads up a new type: it points at an assembly that is visible in both the current domain and the remote domain, and then instantiates a type from it.
    This is the code in question:

        Assembly assembly = Assembly.GetExecutingAssembly();
        string assemblyPath = Assembly.GetExecutingAssembly().Location;
        parser = (TypeParser)this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                                 typeof(TypeParser).FullName).Unwrap();

    The last line of code is what blows up with the "Unable to cast transparent proxy to type <type>" error. Without the cast the code actually returns a TransparentProxy instance; the cast is what blows up. In other words, I AM in fact getting a TypeParser instance back, but it can't be cast to the TypeParser type that is loaded in the current AppDomain.

    Finding the Problem

    To see what's going on I tried using the .NET 4.0 dynamic type on the result, and lo and behold, it worked with dynamic - the value returned is actually a TypeParser instance:

        Assembly assembly = Assembly.GetExecutingAssembly();
        string assemblyPath = Assembly.GetExecutingAssembly().Location;
        object objparser = this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                               typeof(TypeParser).FullName).Unwrap();

        // dynamic works
        dynamic dynParser = objparser;
        string info = dynParser.GetVersionInfo(); // method call works

        // casting fails
        parser = (TypeParser)objparser;

    So clearly a TypeParser type is coming back, but nevertheless it's not the right one. Hmmm... mysterious. Another couple of tries reveal the problem, however:

        // works
        dynamic dynParser = objparser;
        string info = dynParser.GetVersionInfo(); // method call works

        // c:\wwapps\wwhelp\wwReflection20.dll (current execution folder)
        string info3 = typeof(TypeParser).Assembly.CodeBase;

        // c:\program files\vfp9\wwReflection20.dll (my COM client EXE's folder)
        string info4 = dynParser.GetType().Assembly.CodeBase;

        // fails
        parser = (TypeParser)objparser;

    As you can see, the second value is coming from a totally different assembly. Note that this is even though I EXPLICITLY SPECIFIED an assembly path to load the assembly from! Instead .NET decided to load the assembly from the original ApplicationBase folder. Ouch!

    How I actually tracked this down was a little more tedious: I added a method like this to both the factory and the instance types and then compared notes:

        public string GetVersionInfo()
        {
            return ".NET Version: " + Environment.Version.ToString() + "\r\n" +
                   "wwReflection Assembly: " + typeof(TypeParserFactory).Assembly.CodeBase
                       .Replace("file:///", "").Replace("/", "\\") + "\r\n" +
                   "Assembly Cur Dir: " + Directory.GetCurrentDirectory() + "\r\n" +
                   "ApplicationBase: " + AppDomain.CurrentDomain.SetupInformation.ApplicationBase + "\r\n" +
                   "App Domain: " + AppDomain.CurrentDomain.FriendlyName + "\r\n";
        }

    For the factory I got:

        .NET Version: 4.0.30319.239
        wwReflection Assembly: c:\wwapps\wwhelp\bin\wwreflection20.dll
        Assembly Cur Dir: c:\wwapps\wwhelp
        ApplicationBase: C:\Programs\vfp9\
        App Domain: wwReflection534cfa1f

    For the instance type I got:

        .NET Version: 4.0.30319.239
        wwReflection Assembly: C:\Programs\vfp9\wwreflection20.dll
        Assembly Cur Dir: c:\wwapps\wwhelp
        ApplicationBase: C:\Programs\vfp9\
        App Domain: wwDotNetBridge_56006605

    which clearly shows the problem. You can see that the two types live in different AppDomains, and each is loading the assembly from a different location. Probably a better solution yet (for ANY kind of assembly loading problem) is to use the .NET Fusion Log Viewer to trace assembly loads. The Fusion viewer will show a load trace for each assembly loaded and where it's looking to find it.
    The last trace that I found for the second wwReflection20 load (the one that is wonky) looks like this:

        *** Assembly Binder Log Entry (1/13/2012 @ 3:06:49 AM) ***
        The operation was successful.
        Bind result: hr = 0x0. The operation completed successfully.
        Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\V4.0.30319\clr.dll
        Running under executable c:\programs\vfp9\vfp9.exe
        --- A detailed error log follows.
        === Pre-bind state information ===
        LOG: User = Ras\ricks
        LOG: DisplayName = wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null (Fully-specified)
        LOG: Appbase = file:///C:/Programs/vfp9/
        LOG: Initial PrivatePath = NULL
        LOG: Dynamic Base = NULL
        LOG: Cache Base = NULL
        LOG: AppName = vfp9.exe
        Calling assembly : (Unknown).
        ===
        LOG: This bind starts in default load context.
        LOG: Using application configuration file: C:\Programs\vfp9\vfp9.exe.Config
        LOG: Using host configuration file:
        LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\V4.0.30319\config\machine.config.
        LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).
        LOG: Attempting download of new URL file:///C:/Programs/vfp9/wwReflection20.DLL.
        LOG: Assembly download was successful. Attempting setup of file: C:\Programs\vfp9\wwReflection20.dll
        LOG: Entering run-from-source setup phase.
        LOG: Assembly Name is: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
        LOG: Binding succeeds. Returns assembly from C:\Programs\vfp9\wwReflection20.dll.
        LOG: Assembly is loaded in default load context.
        WRN: The same assembly was loaded into multiple contexts of an application domain:
        WRN: Context: Default | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
        WRN: Context: LoadFrom | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
        WRN: This might lead to runtime failures.
        WRN: It is recommended to inspect your application on whether this is intentional or not.
        WRN: See whitepaper http://go.microsoft.com/fwlink/?LinkId=109270 for more information and common solutions to this issue.

    Notice that the fusion log clearly shows that the .NET loader makes no attempt to even load the assembly from the path I explicitly specified.

    Remember your Assembly Locations

    As mentioned earlier, all failures I've seen like this ultimately resulted from different versions of the same type being available in the two AppDomains. At first sight that seems ridiculous - how could the types be different, and why would you have multiple assemblies? - but there are actually a number of scenarios where it's quite possible to have multiple copies of the same assembly floating around in multiple places. If you're hosting different environments (like hosting the Razor Engine, or the ASP.NET Runtime, for example) it's common to create a private BIN folder, and it's important to make sure that there's no overlap of assemblies. In my case of Html Help Builder, the problem started because I'm using COM interop to access the .NET assembly and the above code. COM interop has very specific requirements on where assemblies can be found, and because I was mucking around with the loader code today, I ended up moving assemblies around to a new location for explicit loading. The explicit load works in the main AppDomain, but failed in the remote domain as I showed.
    The solution here was simple enough: delete the extraneous assembly that was left around by accident. Not a common problem, but one that when it bites is pretty nasty to figure out, because it seems so unlikely that types wouldn't match. I know I've run into this a few times, and writing this down will hopefully make me remember in the future rather than poking around again for an hour trying to debug the issue as I did today. Hopefully it'll save some of you some time as well.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, COM

    Read the article

  • What about introduction to programming with C# via LINQPad?

    - by Gulshan
    From different questions/answers/articles on this and some other sites, I got the idea that an introductory language for programming should be high level and less verbose. C# is one of the heavily used high-level languages these days. It's also multi-paradigm and a descendant of C, the lingua franca of programming languages. So I think it has the potential to be the introductory programming language. But I felt it's a bit verbose for novice learners. Then LINQPad came to mind. With LINQPad, someone can start with C# without its verbosity, because you can run just one statement, a few statements, or a standalone function - and you can run a full source file as well. Another thing it provides is SQL support, so it can be used for learning SQL too. And not to mention, it's free. So, what do you think about the idea of introducing programming with C# via LINQPad? Anything to watch out for? Any suggestions?
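    To make the "less verbose" point concrete: in LINQPad's Expression and Statements modes there is no class, namespace or Main method to type, and its built-in Dump() extension method prints any value. A hedged sketch of what a first lesson could look like:

        // Expression mode: a complete "program" is a single expression
        Enumerable.Range(1, 10).Where(n => n % 2 == 0).Sum()

        // Statements mode: a few lines, still no class or Main required
        var words = new[] { "alpha", "beta", "gamma" };
        words.Where(w => w.Length > 4).Dump();

    The same query written as a conventional console application would need a using directive, a class, a Main method and a Console.WriteLine - exactly the ceremony a novice doesn't need on day one.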

    Read the article

  • Scaling background without scaling foreground in platformer?

    - by David Xu
    I'm currently developing a platform game and I've run into a problem with scaling resolutions. At a different resolution I still want the game to display the foreground unscaled (characters, tiles, etc.), but I want the background to be scaled to fit the window. To explain this better: my viewport has 4 variables (x, y, width, height), where x and y are the top-left corner and width and height are the dimensions. These can be 800x600, 1024x768 or 1280x960. When I design my levels, I design everything for the highest resolution (1280x960) and expect the game engine to scale it down if a user is running at a lower resolution. I have tried the following to make it work, but nothing I've come up with solves it so far:

        scale = view->width/1280;
        drawX = x * scale;
        drawY = y * scale;

    (this makes the translation too small for low resolutions) and

        scale = view->width/1280;
        bgWidth = background->width*scale;
        bgHeight = background->height*scale;
        drawX = x + background->width/2 - bgWidth/2;
        drawY = y + background->height/2 - bgHeight/2;

    (this makes the translation completely wrong at the edges of the map). The thing is, no matter what resolution the game is run at, the map remains the same size and the foreground is unscaled. (With a lower resolution you just see less of the foreground in the viewport.) I was wondering if anyone had any idea how to solve this problem? Thank you in advance!
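    One approach that matches the description - foreground in world space, background in screen space - is to scale only the background to the viewport and drive its scroll from the camera with a parallax factor, rather than scaling the camera translation itself. A hedged C#-style sketch; camera, view and DrawSprite are invented placeholders, not anything from the post:

        // Design resolution the art was authored for (from the post)
        const float DesignWidth = 1280f;

        // Background: screen-space. Scale it once to fill the window,
        // then let the camera move it at a reduced (parallax) rate.
        float bgScale  = (float)view.Width / DesignWidth;  // 0.625 at 800x600
        float parallax = 0.5f;                             // 1.0 = locked to foreground
        float bgDrawX  = -camera.X * parallax * bgScale;
        float bgDrawY  = -camera.Y * parallax * bgScale;
        DrawSprite(background, bgDrawX, bgDrawY, bgScale);

        // Foreground: world-space, never scaled. Lower resolutions
        // simply show a smaller window onto the same-sized map.
        float fgDrawX = x - camera.X;
        float fgDrawY = y - camera.Y;
        DrawSprite(tile, fgDrawX, fgDrawY, 1.0f);

    The key point is that the background's scale factor never touches the foreground math, so the foreground translation stays correct at every resolution. (Note also that view->width/1280 truncates to 0 in integer arithmetic; the division needs to happen in floating point.)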

    Read the article

  • What should I quote for a project I hope to get a job at the end of?

    - by thesunneversets
    Long story short: I applied for a (CakePHP, MySQL, etc) development job in London, UK. I grew up in Britain but am currently based quite a few thousand miles away in Canada, so I wasn't really expecting success. But quite a few emails and phone interviews later it seems that they really like me. At least to a point. Because such a major relocation would be a horrible thing to go wrong, they've sensibly suggested a trial run of getting me to build a website at a distance. I have the spec for this and it's quite a substantial amount of work. My problem is that I now need to suggest both a fee and a timescale for the job, and I haven't got any significant experience of working as a contractor. Looking at the spec, which is 1500 words of many concisely stated features, some fairly trivial and some moderately involved, I can easily imagine there being 2 weeks of intensive work there. (If everything went really well it might be closer to one week, but even though I want to impress, I definitely don't want to fall into the inexperienced-contractor trap of massively underestimating the amount of time a project will run to.) As an extra complication, there is no expectation that I should give up my day job to get this trial project done, so the hours will have to be clawed from evenings and weekends. I don't want to overcommit to a quick delivery date, only to find myself swiftly burning out due to an unrealistic workload. So, any advice for me? My main question is, what is a realistic hourly figure to demand of a stable but not excessively wealthy London-based company in the current market, bearing in mind that I'd like them to hire me afterwards? But any more general recommendations based on my circumstances above would be much appreciated too. Many thanks!

    Read the article

  • My first encounter with SmartAssembly

    - by Peter Larsson
    Let me start by saying I am a supreme VB6 programmer, but I have very little experience with VB.Net, so I think I still need some more time to learn SmartAssembly. SmartAssembly makes obfuscating and merging DLL files a piece of cake! With its simple, straightforward and clean GUI I did make my tests work. With other obfuscators like Xenocode, Salamander etc., which let you (and in some cases force you to) control more advanced settings, you really have to know what you are doing, especially when it comes to protecting code that uses external dependencies. My most annoying experience is that if you start checking radio buttons and activating different obfuscating features in SmartAssembly, you will end up breaking your working code as well, if you, like me, are not that experienced and don't know what you're doing. SmartAssembly has some troubleshooting information on its website which explains why the application will fail in some scenarios. So why not extend these checks into some deeper analysis stage on the DLLs? By doing that, I think more people could get fully functional DLLs out of the box, instead of trying different settings and then testing the protected DLL to see if it's working or not. //Peter

    Read the article

  • Preventing battery from charging

    - by intuited
    I'm running on UPS power and would like to prevent the laptop's battery from charging, to increase the amount of power available to other devices. Is there a way to do this? Update: The machine is a Dell Latitude D400; if people want more details, just ask. Also, I'm gathering that I need to explain my desired setup a little better. I've gotten a bunch of suggestions about taking the battery out. I'm not sure if people are suggesting taking the battery out while the machine is running - this, as I understand, is not a good idea with most laptops - or just removing the battery altogether. The latter option is not optimal, because ideally I'd like to use the 30-60 minutes of power in the laptop battery and then switch over to UPS power. The details of the switch-over may constitute a separate question, but if I can't find a way to keep the laptop battery from charging, then removing the battery from the machine altogether may be the best way to do this. I'm not sure yet if this machine will run without a battery, but I'll check that out. Other than the laptop, the UPS is just supporting a cable modem, a router and a USB hub. Again, in the idealized version of this setup, all the power management changes would be automated, i.e. not require replugging anything or pressing Fn-keys. I'd like the machine to start using laptop battery power when apcupsd indicates that the UPS A/C is out, and then start using UPS power, but not charging the battery, when the battery is almost depleted.

    Read the article

  • Cannot uninstall myspell-he

    - by Yuval Rabinovich
    I tried to install a Hebrew spell checker for LibreOffice. I downloaded a package named myspell-he_1.1-1_all.deb and tried to install it through the Software Center; installation failed and I can neither complete the installation nor remove it. Since then, whenever I run Update Manager I get an error message: "The package system is broken". In the details: "The following packages have unmet dependencies: dictionaries-common". When I try to run the Software Center I get a pop-up window: "Items cannot be installed or removed until the package catalog is repaired." A Repair option is suggested. If I click it, the following message appears:

        installArchives() failed: dpkg: dependency problems prevent configuration of myspell-he:
         dictionaries-common (1.12.1ubuntu2) breaks myspell-he (<= 1.1-1) and is installed.
         Version of myspell-he to be configured is 1.1-1.
        dpkg: error processing myspell-he (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        Errors were encountered while processing: myspell-he
        Error in function: dpkg: dependency problems prevent configuration of myspell-he:
         dictionaries-common (1.12.1ubuntu2) breaks myspell-he (<= 1.1-1) and is installed.
         Version of myspell-he to be configured is 1.1-1.
        dpkg: error processing myspell-he (--configure):
         dependency problems - leaving unconfigured

    How can I clear this mess?
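    The dpkg output says dictionaries-common explicitly breaks myspell-he <= 1.1-1, so the downloaded .deb is simply too old for this release and will never configure. A hedged way out of the half-installed state, using standard dpkg/apt recovery commands (worth double-checking on your system first):

        # Force out the half-configured package despite its broken state
        sudo dpkg --remove --force-remove-reinstreq myspell-he

        # Let apt repair whatever dependencies are left dangling
        sudo apt-get -f install

    After that, installing the distribution's own Hebrew dictionary package from the repositories, rather than a downloaded .deb, should avoid the version conflict.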

    Read the article

  • Project Euler 14: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here's my solution for Project Euler Problem 14. As always, any feedback is welcome.

        # Euler 14
        # http://projecteuler.net/index.php?section=problems&id=14
        # The following iterative sequence is defined for the set
        # of positive integers:
        #   n -> n/2 (n is even)
        #   n -> 3n + 1 (n is odd)
        # Using the rule above and starting with 13, we generate
        # the following sequence:
        #   13 40 20 10 5 16 8 4 2 1
        # It can be seen that this sequence (starting at 13 and
        # finishing at 1) contains 10 terms. Although it has not
        # been proved yet (Collatz Problem), it is thought that all
        # starting numbers finish at 1. Which starting number,
        # under one million, produces the longest chain?
        # NOTE: Once the chain starts the terms are allowed to go
        # above one million.

        import time
        start = time.time()

        def collatz_length(n):
            # 0 and 1 return self as length
            if n <= 1:
                return n
            length = 1
            while n != 1:
                if n % 2 == 0:
                    n /= 2
                else:
                    n = 3*n + 1
                length += 1
            return length

        starting_number, longest_chain = 1, 0
        for x in xrange(1, 1000001):
            l = collatz_length(x)
            if l > longest_chain:
                starting_number, longest_chain = x, l

        print starting_number
        print longest_chain

        # Slow 31 seconds
        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')

    Read the article

  • Grub options are not visible when booting a Samsung ATIV Book 9 Lite running Ubuntu 14.04

    - by mjwittering
    I've managed to install Ubuntu 14.04 on my new Samsung ATIV Book 9 Lite ultrabook. After updating some configurations in the UEFI, installation was very easy. The only issue I believe I'm still experiencing is when booting: when the laptop should be displaying the grub boot options, I see a black screen with a purple border of 10px around the edge. I'd like to know how I can update my system so that I see the grub boot manager. I've run these commands:

        sudo cat /etc/default/grub
        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'

        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""

        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console

        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480

        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true

        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"

        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

    Running sudo efibootmgr was not possible.
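    The GRUB_HIDDEN_TIMEOUT=0 line in that file is the usual reason the menu never appears: it tells GRUB to skip straight past it. A hedged sketch of the change - edit /etc/default/grub with sudo, then regenerate the config:

        # Comment out the hidden-timeout settings so the menu is shown
        #GRUB_HIDDEN_TIMEOUT=0
        #GRUB_HIDDEN_TIMEOUT_QUIET=true

        # Keep a visible countdown (already set to 10 above)
        GRUB_TIMEOUT=10

    followed by:

        sudo update-grub

    The purple screen is consistent with GRUB loading but hiding its menu, so this change alone may be enough; holding Shift (BIOS) or tapping Esc (UEFI) during boot is a quick way to test the menu without editing anything.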

    Read the article

  • Is HTML5/WebGL performance bad on low-end Android tablets and phones?

    - by Boris van Schooten
    I've developed a couple of WebGL games, and am trying them out on Android. I found that they run very slowly on my tablet, however. For example, a game with 10 sprites or so runs at 5fps. I tried Chrome and CocoonJS, but they are comparably slow. I also tried other games, and even games with only 5 or so moving sprites are this slow. This seems inconsistent with reports from others, such as this benchmark. Typically, when people talk about HTML5 game performance, they mention well-known and higher-end phones and tablets. While my 7" tablet is cheap (I believe it's a relabeled Allwinner tablet, apparently with the Mali 400 GPU), I found it generally has good gaming performance: all the games I tried run smoothly. I also developed an OpenGL ES 2 demo with 200 shaded 3D objects, and it ran at 50fps. My suspicion is that many low-end and white-label devices may have unacceptable HTML5/WebGL support, which means there may be a large section of gamers you will not reach when you choose this as your platform. I've heard rumors about inconsistent performance of HTML5 and WebGL on different devices, but no clear picture emerges. I would like to hear if any of you have had similar experiences with HTML5 or WebGL, or whether I can find information about the percentage of devices I can expect to have decent performance.

    Read the article

  • mount another drive to the same directory

    - by Ken Autotron
    I recently purchased a server that was advertised as 2TB (2 x 1TB drives) in size. When I use it, it reports only one of the drives; I would like to be able to use both as if they were one drive. Here are the specs:

        sudo lshw -C disk
        *-disk
             description: ATA Disk
             product: TOSHIBA DT01ACA1
             vendor: Toshiba
             physical id: 0.0.0
             bus info: scsi@1:0.0.0
             logical name: /dev/sda
             version: MS2O
             serial: 13EJ81XPS
             size: 931GiB (1TB)
             capabilities: partitioned partitioned:dos
             configuration: ansiversion=5 signature=0005b3dd
        *-disk
             description: ATA Disk
             product: TOSHIBA DT01ACA1
             vendor: Toshiba
             physical id: 0.0.0
             bus info: scsi@4:0.0.0
             logical name: /dev/sdb
             version: MS2O
             serial: 13OX3TKPS
             size: 931GiB (1TB)
             capabilities: partitioned partitioned:dos
             configuration: ansiversion=5 signature=00030e86

    and fdisk -l:

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00030e86

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        4096    41947135    20971520   fd  Linux raid autodetect
        /dev/sdb2        41947136  1952468991   955260928   fd  Linux raid autodetect
        /dev/sdb3      1952468992  1953519615      525312   82  Linux swap / Solaris

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0005b3dd

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        4096    41947135    20971520   fd  Linux raid autodetect
        /dev/sda2        41947136  1952468991   955260928   fd  Linux raid autodetect
        /dev/sda3      1952468992  1953519615      525312   82  Linux swap / Solaris

        Disk /dev/md2: 978.2 GB, 978187124736 bytes
        2 heads, 4 sectors/track, 238815216 cylinders, total 1910521728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/md2 doesn't contain a valid partition table

        Disk /dev/md1: 21.5 GB, 21474770944 bytes
        2 heads, 4 sectors/track, 5242864 cylinders, total 41942912 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/md1 doesn't contain a valid partition table

    Is it possible to mount both drives to, say, /Home/ so I would have 2TB of usable space?

    Read the article

  • TRADACOMS Support in B2B

    - by Dheeraj Kumar
    TRADACOMS is an early standard for EDI, used predominantly in the retail sector of the United Kingdom. It is similar to the EDIFACT messaging system, involving an ecs file for translation and validation of messages. The slight differences between EDIFACT and TRADACOMS are:

    1. TRADACOMS is a simpler version than EDIFACT.
    2. There is no functional acknowledgment in TRADACOMS.
    3. Since it is just a business message to be sent to the trading partner, the various reference numbers at STX, BAT and MHD level need not be persisted in B2B, as no business logic is derived from them.

    Considering this, in AS11 B2B this can be handled out of the box using the positional flat file document plugin. The STX and BAT segments, which define the envelope details and are part of the transaction, have to be sent from the back-end application itself, as there are no document protocol parameters defined in B2B. These would include identifiers like SenderCode, SenderName, RecipientCode, RecipientName, and reference numbers. Additionally, batching in this case can be achieved by sending all the messages of a batch in a single XML from the back-end application, containing the total number of messages in the batch as part of the EOB (batch trailer) segment. In the inbound scenario, we can identify the document based on the start position and end position of the incoming document. However, there is a plan to identify the incoming document based on the TRADACOMS standard instead of start/end position. Please email [email protected] if you need a working sample.

    Read the article

  • Asus Eee PC 1000HE wireless woes

    - by Vladimir Noobokov
    Ever since I upgraded my Asus Eee PC 1000HE from Lucid 10.04 to Precise 12.04 I have been having issues with my wireless connections. At first I had wireless dropouts: I would be able to start using wireless, but after a few minutes the wireless would stop working even though I was still connected to the network. Lately things have turned worse: while I connect to my wireless network, it just never works. I tried all sorts of solutions on offer here and in other forums, but none worked. At best I got the wireless to work up until I rebooted, at which point I would get the same symptoms again: the wireless network is there, but it's not really working. By now I have tried so many different "solutions" I don't know where to start describing them; I have also reinstalled 12.04 several times, enough to make me lose faith in Ubuntu. Help here looks like my last resort. For the record, my Asus Eee PC 1000HE is equipped with an Atheros wireless card. I have reinstalled 12.04, run all the suggested updates, and receive the following response when I type iwconfig in the terminal:

        lo        no wireless extensions.

        wlan0     IEEE 802.11bgn  ESSID:"Arsenal"
                  Mode:Managed  Frequency:2.452 GHz  Access Point: 00:04:ED:48:67:89
                  Bit Rate=1 Mb/s   Tx-Power=16 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=70/70  Signal level=-29 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:27  Invalid misc:57   Missed beacon:0

        eth0      no wireless extensions.

    Thanks in advance for any help that might be offered.
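    A detail worth noting in that output: the link quality is perfect (70/70, -29 dBm) but the bit rate has collapsed to 1 Mb/s, a symptom frequently reported on 12.04 with Atheros chips served by the ath9k driver. A hedged workaround to try - first confirm the driver actually is ath9k, then disable hardware crypto:

        # Which driver is bound to the wireless chip?
        lspci -k | grep -A3 -i network

        # If it is ath9k, turn off hardware encryption (a widely reported fix)
        echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf
        sudo modprobe -r ath9k && sudo modprobe ath9k

    If the module can't be removed while in use, a reboot applies the option just as well.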

    Read the article

  • Ubuntu 12.04, xbmc, opengl, intel motherboard

    - by Sean Hagen
    I've got an HTPC that I built myself, with an Asus P5G41T-M motherboard. It's got an on-board HDMI port, and I've been using that with no problems. I started out with Mythbuntu (an older version), and recently updated to 12.04.1 LTS without any issues. I've been thinking about trying out XBMC for a while, and I decided to give it a go. Unfortunately, I seem to be running into quite a few issues. I got XBMC installed from the repos without any issues, but when I try to run it from a console, a box pops up with the following:

        XBMC needs hardware accelerated OpenGL rendering.
        Install an appropriate graphics driver.
        Please consult XBMC Wiki for supported hardware
        http://wiki.xbmc.org/?title=Supported_hardware

    In the console, it prints out the following:

        X Error of failed request:  BadRequest (invalid request code or no such operation)
          Major opcode of failed request:  136 (GLX)
          Minor opcode of failed request:  19 (X_GLXQueryServerString)
          Serial number of failed request:  12
          Current serial number in output stream:  12

    When I run vainfo, I get this:

        libva: VA-API version 0.32.0
        libva: va_getDriverName() returns 0
        libva: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
        libva: va_openDriver() returns 0
        vainfo: VA-API version: 0.32 (libva 1.0.15)
        vainfo: Driver version: Intel i965 driver - 1.0.15
        vainfo: Supported profile and entrypoints
              VAProfileMPEG2Simple            : VAEntrypointVLD
              VAProfileMPEG2Main              : VAEntrypointVLD

    The file /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so exists:

        # ls -l /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
        -rw-r--r-- 1 root root 628728 Mar 29 2012 /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so

    And in /var/log/Xorg.0.log the following error pops up: "GLX error: Can not get required symbols." I'm not really sure where to go from here. I've tried searching all over for how to fix this problem. I've done "apt-get --reinstall xserver-xorg" (as well as a few other video driver packages) a few times, and no change. Any help in getting this issue sorted out would be awesome.
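    The "GLX error: Can not get required symbols" line in Xorg.0.log usually means the X server is loading a GLX/Mesa combination that doesn't match the Intel DDX driver, so no hardware GL is exported at all - which is exactly what XBMC is complaining about. A hedged set of 12.04 package names to reinstall as a starting point (verify each is actually installed on your box first):

        sudo apt-get install --reinstall xserver-xorg-video-intel \
            libgl1-mesa-glx libgl1-mesa-dri libglu1-mesa

    then restart X and re-check with:

        glxinfo | grep "direct rendering"

    (glxinfo lives in the mesa-utils package.) If direct rendering reports "Yes", XBMC's OpenGL check should pass.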

    Read the article

  • Multithreading in Windows Phone 7 emulator: A bug

    - by Laurent Bugnion
    Multithreading is supported in Windows Phone 7 Silverlight applications; however, the emulator has a bug (which I discovered, and which was confirmed to me by the dev lead of the emulator team): if you attempt to start a background thread in the MainPage constructor, the thread never starts. The reason is a problem with the emulator UI thread, which doesn't leave any time for the background thread to start. Thankfully there is a workaround (see code below). Also, the bug should be corrected in a future release, so it's not a big deal, even though it is really confusing when you try to understand why the *%&^$£% thread is not &$%&%$£ starting (that was me in the plane the other day ;)

    This code does not work:

        public partial class MainPage : PhoneApplicationPage
        {
            public MainPage()
            {
                InitializeComponent();
                SupportedOrientations = SupportedPageOrientation.Portrait
                                        | SupportedPageOrientation.Landscape;

                var counter = 0;
                ThreadPool.QueueUserWorkItem(o =>
                {
                    while (true)
                    {
                        Dispatcher.BeginInvoke(() =>
                        {
                            textBlockListTitle.Text = (counter++).ToString();
                        });
                    }
                });
            }
        }

    This code does work:

        public MainPage()
        {
            InitializeComponent();
            SupportedOrientations = SupportedPageOrientation.Portrait
                                    | SupportedPageOrientation.Landscape;

            var counter = 0;
            ThreadPool.QueueUserWorkItem(o =>
            {
                while (true)
                {
                    Dispatcher.BeginInvoke(() =>
                    {
                        textBlockListTitle.Text = (counter++).ToString();
                    });

                    // NOTICE THIS LINE!!!
                    Thread.Sleep(0);
                }
            });
        }

    Note that even if the thread is started in a later event (for example the Click of a Button), the behavior without the Thread.Sleep(0) is not good in the emulator. As of now, I would recommend always sleeping when starting a new thread. Happy coding! Laurent Bugnion (GalaSoft)

    Read the article

  • The lifecycle of "cool"

    - by Dori
    I've been thinking lately about how some programming projects/products become "cool," and in particular, how that trend can later reverse. Here are two examples that might better explain my context:

    Textmate

    Whenever someone asks about text editors on OS X, the answer on the SE sites is an automatic "Textmate!" But looked at objectively:

    - Textmate 1.0 shipped October 2004
    - Textmate 1.5 shipped January 2006
    - Textmate 2 was announced February 2006
    - As of September 2010, the currently shipping version is 1.5.9
    - In all of 2010, there have been a total of three posts on the Textmate blog

    At what point (if ever) do Textmate fans start thinking about switching to another text editor? When it breaks after some future Apple update? When alpha geeks they respect start recommending something else? Or?

    jQuery

    Whenever a JavaScript-related question is asked on the SE sites, the knee-jerk response is "jQuery!" I've seen it happen even when the question itself only required a single line of JavaScript, or when the question could be better answered by using CSS. Do the answerers understand they're suggesting a blowtorch to light a candle? That they're recommending adding 70K or so of code to do something trivial? Or is it a symptom of "when you have a hammer, everything looks like a nail" - that is, jQuery is all they know how to do, so that's their recommendation? And do they understand that while they may know jQuery well, that doesn't necessarily mean they know JavaScript? Is there a way to explain that learning JavaScript would make them better jQuery programmers?

    My bigger-picture questions: Is this niche focus primarily a trait of programmers? How do you get programmers to not immediately jump to recommending their personal favorites? What can motivate programmers to review their initial selection criteria and possibly modify their choice? Your thoughts?

    Read the article

  • New database profiling support in ANTS Performance Profiler

    - by Ben Emmett
    In May last year, the ANTS Performance Profiler team added the ability to profile database requests your application makes to SQL Server or Oracle. The really cool thing is that you're shown those requests in the application's call tree, so you can see what .NET code caused those queries to run. It's particularly helpful if you're using an ORM which automagically generates and runs queries for you, but which doesn't necessarily do it in the most efficient way possible. Now, by popular demand, we've added support for profiling MySQL (or MariaDB) and PostgreSQL, so you can see queries run against those databases too. Some of you have also said that you're using the Devart dotConnect data providers instead of the native .NET ones, so we've added support for those drivers too. Hope it helps! For the record, here's a list of supported connectors (the MySQL/MariaDB and PostgreSQL providers and the Devart drivers are new):

    - SQL Server: .NET Framework Data Provider; Devart dotConnect for SQL Server
    - Oracle: .NET Framework Data Provider; Oracle Data Provider for .NET; Devart dotConnect for Oracle
    - MySQL / MariaDB: MySQL Connector/Net; Devart dotConnect for MySQL
    - PostgreSQL: Npgsql .NET Data Provider for PostgreSQL; Devart dotConnect for PostgreSQL
    - SQL Server Compact Edition: .NET Framework Data Provider for SQL Server Compact Edition; Devart dotConnect for SQL Server Pro

    Have we missed a connector or database which you'd find useful? Tell us about it in the comments or by emailing [email protected]. Ben

    Read the article

  • ADF EMG Task Flow Tester Now Available!

    - by Steven Davelaar
    Testing ADF applications has become much easier as of today. At the ADF EMG day at Oracle Open World a new tool was announced: the ADF EMG Task Flow Tester. The ADF EMG Task Flow Tester is a web-based testing tool for ADF bounded task flows. It supports testing of task flows that use pages as well as task flows using page fragments. A sophisticated mechanism to specify task flow input parameters is provided. A set of task flow input parameters and run options can be saved as a task flow testcase, and task flows and their testcases can be exported to XML and imported from XML. The ADF EMG Task Flow Tester can help you in a number of ways:

    - It allows you to unit test your task flows in complete isolation, ruling out dependencies with other task flows when finding and investigating issues.
    - It allows you to quickly test various combinations of task flow input parameters without redeploying the application.
    - It keeps your application cleaner (and saves time), as you no longer need to create a separate test page for each and every bounded task flow with page fragments, as you used to.
    - You can use the tester to simulate a call to your task flow, so you can easily test task flow return values and the return navigation outcome.

    The tool is easy to install as a JDeveloper extension, and easy to use. Check out the Getting Started section in the User Guide and you will be up and running in 5 minutes! Your feedback is most welcome; if you run into issues or have enhancement requests, check out this page.

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this DMV that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005, there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are in use or unused, complete or missing some columns is irrelevant; these are simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this DMV can be executed as:

        SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips

    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database; simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs - or more accurately, in order to choose when to have the NULLs - you need to specify a value for the last parameter. It takes one of 4 values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the DMV with each of these values you can see some interesting differences in the times taken to complete each step:

        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME

        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()

        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set shows some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all, and then progress to the extra information later. We know that the first parameter we supply has to be a database ID, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there, or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory, and we can wrap them in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    which gives us readable database and table names. The index itself can be resolved by joining to sys.indexes:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                                       AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the DMV. If you specify an object_id and an index_id in these, you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.:

        SELECT *
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: Despite me showing that functions can be placed directly in the parameters for this DMV, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;

        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');

        IF @db_id IS NULL
        BEGIN
            PRINT N'Invalid database';
        END
        ELSE IF @object_id IS NULL
        BEGIN
            PRINT N'Invalid object';
        END
        ELSE
        BEGIN
            SELECT *
            FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
        END;
        GO

    In cases where the results of querying this DMV don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), it will be noticed when the results are not consistent with the expected results, and in the case of this blog this is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the DMV are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, as this is a quick look at the DMV and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are:

    - avg_fragmentation_in_percent: the amount that the index is logically fragmented. It will show NULL when the DMV is queried in SAMPLED mode.
    - fragment_count: the number of pieces that the index is broken into. It will show NULL when the DMV is queried in SAMPLED mode.
    - avg_fragment_size_in_pages: the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the DMV is queried in SAMPLED mode.
    - page_count: total number of index or data pages in use.

    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access in order to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data and the speed of access would not be too poor. If the number of fragments increases, then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do to retrieve the data to satisfy the query increases, and performance starts to degrade. This information is useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. That value is arrived at by an internal algorithm that scores the logical fragmentation of the index, taking into account multiple files, the type of allocation unit, and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA who is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business, and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this DMV, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren, which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes.

    Right, let's get back on track then. Querying the DMV with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. There is a correlation between several of these columns: page_count is the number of 8KB pages used by the index, avg_page_space_used_in_percent is how full each page is (how much of the 8KB has actual data written on it), record_count is how many records are recorded in the index, and avg_record_size_in_bytes is the average size of each record. This approximates to:

        (page_count * 8 * 1024) * (avg_page_space_used_in_percent / 100) / record_count = avg_record_size_in_bytes*

    avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: details of the smallest and largest records in the index,
    purely offered as a guide to help the DBA better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider:

    - the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
    - the columns used in the index, which should be analysed to avoid new records needing to be inserted in the middle of the index, but rather always added to the end.

    * - it's approximate, as there are many factors associated with things like the type of data and other database settings that affect this slightly.

    Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford - a free ebook or paperback from Simple Talk.

    Disclaimer - Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred. Run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • Fix invalid objects and components - BEFORE you upgrade!

    - by Mike Dietrich
    We are currently running a Tech Challenge Workshop with 25 Oracle consultants and support folks from all over EMEA. We call it Tech Challenge because we separate these experts, having between 5 and 20 years of Oracle experience, into 5 groups - and each group has to complete its special challenge, such as moving a database from 10.2 to Exadata V2, or upgrading from single instance 10.2 to Real Application Clusters 11.2 with the new Grid Infrastructure. We start this training with a few presentation pieces about upgrades, Real Application Testing and Golden Gate. And one topic I always point out: keep your database tidy before the upgrade!!! Clean up all invalid objects - especially in the SYS and SYSTEM user schemas - BEFORE you upgrade. Use utlrp.sql to recompile invalid objects. Use Note:753041.1 to diagnose and fix invalid components. Do this always BEFORE you start the upgrade, even if it may take some time. Otherwise your upgrade could fail, or significant parts of the database packages could be invalid after the upgrade as well. I just came across this today, as one group had ~240 invalid objects in the database - and due to the fact that the original system was still there, we could prove that the objects had been invalid before. Good job, BUT ... :-)

    Read the article
