Search Results

Search found 42629 results on 1706 pages for 'dry run'.

  • That’s a wrap! Almost, there’s still one last chance to attend a SQL in the City event in 2012

    - by Red and the Community
    The communities team are back from the SQL in the City multi-city US Tour and we are delighted to have met so many happy SQL Server professionals and Red Gate customers. We set out to run a series of back-to-back events in order to meet, talk to and delight as many SQL Server and Red Gate enthusiasts as possible in 5 different cities in 11 days. We did it! The attendees had a good time too, and 99% of them would attend another SQL in the City event in 2013 – so it seems we left an impression. The event agenda covered a range of topics, including ‘The Whys & Hows of Continuous Integration’, ‘Database Maintenance Essentials’, ‘Red Gate tools – The Complete Lifecycle’, ‘Automated Deployment: Application And Database Releases Without The Headache’, ‘The Ten Commandments of SQL Server Monitoring’ and many more. Videos and slides from the events will be posted to the event website in November, after our last event of 2012. SQL in the City Seattle – November 5 Join us for free and hear from some of the very best names in the SQL Server world. SQL Server MVPs such as Steve Jones, Grant Fritchey, Brent Ozar, Gail Shaw and more will be presenting at the Bell Harbor conference center for one day only. We’re even taking on board some of the recent attendee suggestions on how we can improve the events (feedback from the 65% of attendees who came to our US tour events); first off, we’re extending the drinks celebration in the evening! Rather than just a 30-minute drink and run, attendees will have up to 2 hours to enjoy free drinks, relax and network in a fantastic environment amongst some really smart, like-minded professionals. If you’re interested in expanding your SQL Server knowledge, or would like to learn more about Red Gate tools, get yourself registered for the last SQL in the City event of 2012. It’s free, fun and we’re very friendly! I look forward to seeing you in Seattle on Monday, November 5. Cheers, Annabel.

    Read the article

  • The Oracle Linux Advantage

    - by Monica Kumar
    It has been a while since we've summed up the Oracle Linux advantage over other Linux products. Wim Coekaerts' new blog entries prompted me to write this article. Here are some highlights. Best enterprise Linux - Since launching UEK almost 18 months ago, Oracle Linux has leap-frogged the competition in terms of the latest innovations, better performance, reliability, and scalability. Complete enterprise Linux solution: Not only do we offer an enterprise Linux OS, but it comes with management and HA tools that are integrated and included for free. In addition, we offer the entire "apps to disk" solution for Linux if a customer wants a single source. Comprehensive testing with enterprise workloads: Within Oracle, thousands of servers run an incredible amount of QA on Oracle Linux, amounting to 100,000 hours every day. This helps in making Oracle Linux even better for running enterprise workloads. Free binaries and errata: Oracle Linux is free to download, including patches and updates. Highest quality enterprise support: Available 24/7 in 145 countries, Oracle has been offering affordable Linux support since 2006. The support team is a large group of dedicated professionals globally who are trained to support serious mission-critical environments; not only do they know their products, they also understand the inter-dependencies with database, apps, storage, etc. Best practices to accelerate database and apps deployment: With pre-installed, pre-configured Oracle VM Templates, we offer virtual machine images of Oracle's enterprise software so you can easily deploy them on Oracle Linux. In addition, Oracle Validated Configurations offer documented tips for configuring Linux systems to run Oracle Database. We take the guesswork out and help you get to market faster. More information on all of the above is available on the Oracle Linux Home Page. Wim Coekaerts did a great job of detailing these advantages in two recent blog posts he published last week. Blog article: Oracle Linux components http://bit.ly/JufeCD Blog article: More Oracle Linux options: http://bit.ly/LhY0fU These are must-reads!

    Read the article

  • puppet rspec no such file to load -- rspec-puppet (LoadError)

    - by Vorsprung
    I have no prior experience at all with Ruby. I am not interested in Ruby as such (and so have no knowledge of Rails etc.) but am using Puppet to manage a group of servers. I have written some modules and the rspec-puppet system looks like it would be very useful. However, I cannot get rspec-puppet to work. I am using Ubuntu 10.04 LTS. I have installed rspec-puppet using the directions on their web page. What I actually did: apt-get install rubygems # (installs 1.8) gem install rspec-expectations gem install rspec-puppet I also installed librspec-ruby1.8. Then I ran rspec-puppet-init in a puppet module directory I'd already made (it's a working puppet module). I made a file as defined in the tutorial: $ more spec/defines/rule_spec.rb require 'spec_helper' describe 'vanusers::rule' do let(:title) { 't1' } it { should contain_class('vanusers::JamieA') } end but when I try to run it there is a mysterious dependency issue: $ spec spec/defines/rule_spec.rb /home/jamie/git/puppet/modules/vanusers/spec/spec_helper.rb:1:in `require': no such file to load -- rspec-puppet (LoadError) from /home/jamie/git/puppet/modules/vanusers/spec/spec_helper.rb:1 from ./spec/defines/rule_spec.rb:1:in `require' from ./spec/defines/rule_spec.rb:1 from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:15:in `load' from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:15:in `load_files' from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:14:in `each' from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:14:in `load_files' from /usr/lib/ruby/1.8/spec/runner/options.rb:132:in `run_examples' from /usr/lib/ruby/1.8/spec/runner/command_line.rb:9:in `run' from /usr/bin/spec:3 Here is the solution I came up with in the end: apt-get install rubygems gem install rspec-expectations rspec-puppet puppet-lint puppetlabs_spec_helper so your path picks up the gem stuff: export PATH=/var/lib/gems/1.8/bin:$PATH cd into the module and rm spec/spec_helper.rb, run rspec-puppet-init, and replace the Rakefile with require 'rake' require 'rspec/core/rake_task' require 'puppetlabs_spec_helper/rake_tasks' (see the sketch below). Then "rake spec" to run tests or "rake lint" to check files. http://sysadvent.blogspot.co.uk/2013/12/day-22-getting-started-testing-your.html was an excellent source of info.
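
    For reference, here is roughly what the two files described in the fix above look like when written out as separate files. This is a minimal sketch based only on the snippets quoted in the question and answer; the module name (vanusers) and the spec contents are taken from that example.

      # Rakefile -- pulls in the puppetlabs_spec_helper rake tasks ("rake spec", "rake lint")
      require 'rake'
      require 'rspec/core/rake_task'
      require 'puppetlabs_spec_helper/rake_tasks'

      # spec/defines/rule_spec.rb -- the defined-type spec quoted in the question
      require 'spec_helper'

      describe 'vanusers::rule' do
        let(:title) { 't1' }
        it { should contain_class('vanusers::JamieA') }
      end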

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here. The Example We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below: CREATE TABLE #Custs ( CustomerID INTEGER NOT NULL, TerritoryID INTEGER NULL, CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #Prods ( ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #OrdHeader ( SalesOrderID INTEGER NOT NULL, OrderDate DATETIME NOT NULL, SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, CustomerID INTEGER NOT NULL, ); GO CREATE TABLE #OrdDetail ( SalesOrderID INTEGER NOT NULL, OrderQty SMALLINT NOT NULL, LineTotal NUMERIC(38,6) NOT NULL, ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, ); GO INSERT #Custs ( CustomerID, TerritoryID, CustomerType ) SELECT C.CustomerID, C.TerritoryID, C.CustomerType FROM AdventureWorks.Sales.Customer C WITH (TABLOCK); GO INSERT #Prods ( ProductMainID, ProductSubID, ProductSubSubID, Name ) SELECT P.ProductID, P.ProductID, P.ProductID, P.Name FROM AdventureWorks.Production.Product P WITH (TABLOCK); GO INSERT #OrdHeader ( SalesOrderID, OrderDate, SalesOrderNumber, CustomerID ) SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK); GO INSERT #OrdDetail ( SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID ) SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK); The query itself is a simple join of the four tables: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #OrdDetail D ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID JOIN #OrdHeader H ON D.SalesOrderID = H.SalesOrderID JOIN #Custs C ON H.CustomerID = C.CustomerID ORDER BY P.ProductMainID ASC OPTION (RECOMPILE, MAXDOP 1); Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query looks like this (click to enlarge): The Problem The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.  
Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine. A Solution At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on). Faster Nested Loops It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #OrdDetail D JOIN #OrdHeader H ON D.SalesOrderID >= H.SalesOrderID AND D.SalesOrderID <= H.SalesOrderID JOIN #Custs C ON H.CustomerID >= C.CustomerID AND H.CustomerID <= C.CustomerID JOIN #Prods P ON P.ProductMainID >= D.ProductMainID AND P.ProductMainID <= D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER); I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan is: This new query runs in under 2 seconds. Why Is It Faster? The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds a index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.  
    The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index. From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table. A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table. The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself. The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation. Nested loops join preserves the order of rows on its outer input, so sorting early is safe. (Hash joins do not preserve order in this way, of course). The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input. It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation. Its presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system. Improving the Hash Join I promised I would return to the solution that uses hash joins. You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins. The answer, again, is down to the poor information available to the optimizer. Let’s look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs. SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk. The join process then continues, and may again run out of memory. This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow. The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion. You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this). Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix them?
We can address the hash spilling by forcing a different order for the joins: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #Custs C JOIN #OrdHeader H ON H.CustomerID = C.CustomerID JOIN #OrdDetail D ON D.SalesOrderID = H.SalesOrderID ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER); With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery. Final Thoughts SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ
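
    As a footnote to the hinting discussion in this excerpt, the difference between the two hinting styles described above can be sketched roughly as follows. This is a simplified two-table illustration using the article's temporary tables, not a query from the article itself: the HASH keyword written on the join also forces the join order, while the OPTION (HASH JOIN) query hint forces hash joins but leaves the join order to the optimizer.

      -- Join hint: forces a hash join AND fixes the join order as written
      SELECT H.SalesOrderNumber, D.OrderQty
      FROM #OrdHeader H
      INNER HASH JOIN #OrdDetail D
          ON D.SalesOrderID = H.SalesOrderID;

      -- Query hint: forces hash joins, but the optimizer may still reorder them
      SELECT H.SalesOrderNumber, D.OrderQty
      FROM #OrdHeader H
      JOIN #OrdDetail D
          ON D.SalesOrderID = H.SalesOrderID
      OPTION (HASH JOIN);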

    Read the article

  • DBA Command line options

    - by Anthony Shorten
    There are a number of database utilities supplied with the installation of the Oracle Utilities Application Framework-based products. These are typically run in interactive mode, where the utility prompts you for the values and then executes the required functionality. Did you know that the utilities also have command line options that allow you to run the utility in silent mode as well? You can access the command line options by specifying the -h option on the command line. Here is an example of the oragensec command line options: oragensec -d <Owner,OwnerPswd,DbName> -u <Database Users> -r <ReadRole,UserRole> -l <logfile> -h where: -d <Owner,OwnerPswd,DbName> Database connect information for the target database. e.g. spladm,spladm,DB200ODB. -u <Database Users> A comma-separated list of database users where synonyms need to be created. e.g. spluser, splread -r <ReadRole,UserRole> Optional. Names of database roles with read and read-write privileges. Default roles are SPL_READ, SPL_USER. e.g. spl_read,spl_user -l <logfile> Optional. Name of the log file. -h Help The command line options allow the DBA to automate the execution, either via a script or via some other utility that can then execute these utilities. This option applies to the majority of DBA utilities supplied with the product. Take a look at the others.
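
    To make the silent-mode idea concrete, a call assembled from the options listed above might look roughly like the sketch below. The owner, user, role and log file values are simply the example values from the usage text, not values from a real installation.

      # Run oragensec non-interactively, supplying every prompt on the command line
      oragensec -d spladm,spladm,DB200ODB \
                -u spluser,splread \
                -r spl_read,spl_user \
                -l oragensec.log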

    Read the article

  • Getting Started with Employee Info Starter Kit (v4.0.0)

    - by joycsharp
    The new release of Employee Info Starter Kit contains lots of exciting features available in Visual Studio 2010 and .NET 4.0. To get started with the new version, you will need less than 5 minutes. Minimum System Requirements Before getting started, please make sure you have Visual Studio 2010 RC (or higher) and SQL Server 2005 Express edition (or higher) installed on your machine. Running the Starter Kit for the First Time 1. Download the starter kit 4.0.0 version from here and extract it. 2. Go to <extraction folder>\Source\Eisk.Solution and click the solution file. 3. From the solution explorer, right click the “Eisk.Web” web site project node, select “Set as Startup Project” and hit Ctrl + F5. 4. You will be prompted to install the database; just follow the instructions. That’s it! You are ready to use this starter kit. Running the Tests Employee Info Starter Kit contains an infrastructure for Integration and Unit Testing, by utilizing cool test tools in Visual Studio 2010. Once you complete the steps mentioned above, take a minute to run the test cases on the fly. 1. From the solution explorer, go to “Solution Items\e-i-s-k-2010.vsmdi” and click it. You will see the available Tests in the Visual Studio Test Lists. Select all, except the “Load Tests” node (since Load Tests take a bit of time). 2. Click the “Run Checked Tests” control in the upper left corner. You will see the tests running and finally the status of the tests, which indicates the current health of your application from different scenarios. Technorati Tags: asp.net,architecture,starter kit,employee info starter kit,visual studio 2010,.net 4.0,entity framework

    Read the article

  • Errors Code: /var/cache/apt/archives/linux-image-3.8.0-19-generic_3.8.0-19.30~precise1_amd64.deb

    - by user286682
    $ sudo apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done The following packages will be upgraded: linux-image-3.8.0-19-generic 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 6 not fully installed or removed. Need to get 0 B/47.8 MB of archives. After this operation, 142 MB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 164064 files and directories currently installed.) Preparing to replace linux-image-3.8.0-19-generic 3.8.0-19.29 (using .../linux-image-3.8.0-19-generic_3.8.0-19.30~precise1_amd64.deb) ... Done. Unpacking replacement linux-image-3.8.0-19-generic ... dpkg: error processing /var/cache/apt/archives/linux-image-3.8.0-19-generic_3.8.0-19.30~precise1_amd64.deb (--unpack): trying to overwrite '/lib/modules/3.8.0-19-generic/kernel/arch/x86/kvm/kvm-intel.ko', which is also in package linux-image-extra-3.8.0-19-generic 3.8.0-19.29 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.8.0-19-generic /boot/vmlinuz-3.8.0-19-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.8.0-19-generic /boot/vmlinuz-3.8.0-19-generic Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.8.0-19-generic_3.8.0-19.30~precise1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) How can I get this update to work?

    Read the article

  • Ubuntu 13.04 running really slow and Hanging

    - by CAM
    Up until recently I have been running 13.04 on my laptop very happily. This morning, however, I turned on my laptop to find it running really slow. It takes 5 min to load a program, and even then the program freezes, and I have had 3 system hangs this morning already. The Unity desktop appears to run OK but programs do not. Things I have tried so far: checking for proprietary graphics drivers - none shown available (I have Bumblebee running already); using the recovery boot options from GRUB to repair broken packages. Recent changes: updated the computer, installed some indicator applets which have worked fine for me before. System specs: Asus U36s, Intel Core i5-2450M 2.5GHz, 4GB RAM, Nvidia GeForce 610M 1GB, dual boot Win7 & Ubuntu 13.04. I'm a bit of a noob with Ubuntu but am happy enough running stuff in the terminal if you will advise me on what to run. I'm just a bit stuck on what to do to fix this without a reinstall. Thanks a lot for your help.

    Read the article

  • Display Song Lyrics in Windows Media Player with Lyrics Plugin

    - by DigitalGeekery
    Looking for a way to display song lyrics in Windows Media Player? Today we look at a very simple method to accomplish this with Lyrics Plugin for Windows Media Player. Download and run the Lyrics Plugin install. (See download link below.) When the installation is finished you’ll be prompted to run Windows Media Player. Click Yes. Begin playing your song or playlist, then switch to Now Playing mode. You should now see the full song lyrics of the currently playing track. To toggle the lyrics on and off, select Tools from the Menu in Library view, choose Plug-ins, and click Lyrics Plugin. If you don’t see the Menu bar, you can enable it by going to Organize, Layout, and Show Menu Bar. When Lyrics Plugin is turned off, Windows Media Player will switch back to its default visualization. Whether you just want to know the lyrics or you’d like to hone your karaoke chops, Lyrics Plugin makes a nice addition to Windows Media Player 12. Download Lyrics Plugin for Windows Media Player 12.

    Read the article

  • PHP won't load php.ini

    - by Chuck
    I am racking my brain here and I must be doing something really stupid. I'm trying to set up PHP on Win2008 R2/IIS 7.5. I unpacked php538 into c:\php538 and renamed the php.ini-development to php.ini. Then I tried going to a command prompt and running: c:\php358\php -info I get: Configuration File (php.ini) Path => C:\windows Loaded Configuration File => (none) Scan this dir for additional .ini files => (none) Additional .ini files parsed => (none) I have tried using php5217. I tried putting php.ini in c:\windows. I tried creating the PHPRC environment variable and pointing it to c:\php358. Every time I have the same problem. PHP does not find or load the ini file. If I run: c:\php358\php -c php.ini -info Then it will load the file. But I shouldn't have to do this for PHP to find the file in the same directory, in the Windows directory, or using the environment variable, so I'm stumped. When I try to run PHP from IIS I get a 500 error and I can only assume at this time it is because it can't find and load the php.ini file correctly. I see similar questions on here, but none seem to address the problem I am having.
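
    (Editorial aside, not part of the original question: the PHP CLI also has a --ini switch that reports exactly which php.ini, if any, was loaded, which can help narrow down this kind of search-path problem. A rough sketch of the checks, reusing the install directory named in the question; the PHPRC value is the directory that contains php.ini:)

      REM Show which php.ini (if any) the CLI picked up
      c:\php538\php --ini

      REM Point PHPRC at the directory containing php.ini, then re-check in the same console
      set PHPRC=c:\php538
      c:\php538\php --ini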

    Read the article

  • Tween Animation Cannot Start

    - by David Dimalanta
    Does anyone have an idea why my tween code didn't run or work? I already added the tween engine to the library folder under the LibGDX project folder and enabled "Order and Export" for it under Java Build Path in the Properties menu. My first two classes ran and worked correctly, but my third class didn't. Here's the sequence: The first class is the first screen; the fade animation works on the company's logo. The second class is the second screen; the fade animation for the loading screen works. The third class is the third screen. After the second screen, it now calls for the third screen. The animation stopped or won't run, and I want the black screen to fade out at the start, when the menu appears. Can you check if I did it right? Look carefully at the comment lines for the explanation. //-----[ Animation Setup ]----- Tween.registerAccessor(Sprite.class, new Tween_Animation()); // --> Tween_Animation.java Tween_Manager = new TweenManager(); // --> I initialized the TweenManager and it seems okay. cb_start = new TweenCallback() // --> I'll use this when I choose START and the menu will fade in black. { @Override public void onEvent(int arg0, BaseTween<?> arg1) { goTo(); } }; Tween // --> This is where I focused the problem. .to(black_Sprite, Tween_Animation.ALPHA, 3f) .target(1) .ease(TweenEquations.easeInQuad) .repeatYoyo(200, 2.5f) // --> I set the repeat for 200 times when I noticed that the animation won't work! .start(Tween_Manager);
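
    (Editorial aside, not part of the original question: one thing the snippet above never shows is the TweenManager being updated each frame. With the Universal Tween Engine, managed tweens only advance when the manager's update method is called from the render loop, so whether or not that is the actual problem here, a minimal fade-out sketch, reusing the Tween_Animation accessor and field names from the question, might look roughly like this:)

      // Sketch only: assumes Tween_Animation implements TweenAccessor<Sprite>
      // and exposes an ALPHA tween type, as in the question.
      Tween.registerAccessor(Sprite.class, new Tween_Animation());
      TweenManager tweenManager = new TweenManager();

      // Fade the black overlay from opaque to transparent over 3 seconds,
      // then fire the callback once when the tween completes.
      Tween.to(black_Sprite, Tween_Animation.ALPHA, 3f)
           .target(0f)
           .ease(TweenEquations.easeInQuad)
           .setCallback(cb_start)
           .start(tweenManager);

      // Inside render(float delta) -- this call is what actually drives the tween:
      tweenManager.update(delta);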

    Read the article

  • September 2012 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m excited to announce the September 2012 release of the Ajax Control Toolkit! This is the first release of the Ajax Control Toolkit which supports the .NET 4.5 framework. We also continue to support ASP.NET 3.5 and ASP.NET 4.0. With this release, we’ve made several important bug fixes. The Superexpert team focused on fixing the highest voted issues associated with the CascadingDropDown control. I’ve created a list of these bug fixes later in this blog post. You can download the latest release of the Ajax Control Toolkit by visiting the following page at CodePlex: http://AjaxControlToolkit.CodePlex.com Alternatively, you can install the latest version of the Ajax Control Toolkit using NuGet by firing off the following command from the Package Manager Console: Install-Package AjaxControlToolkit Using the Ajax Control Toolkit with ASP.NET 4.5 Let me walk through the steps for using the Ajax Control Toolkit with ASP.NET 4.5. First, I’ll create a new ASP.NET 4.5 website with Visual Studio 2012. I’ll create the new website with the ASP.NET Web Forms Application template: When you create a new ASP.NET 4.5 site with the ASP.NET Web Forms Application template, you get a starter website. If you run the site, then you get a page with default content: Let me show you how you can add the Ajax Control Toolkit Calendar control to the homepage of this starter site. The first step is to use NuGet to install the Ajax Control Toolkit. Right-click the References folder in the Solution Explorer window and select the menu option Manage NuGet Packages. In the Manage NuGet Packages dialog, use the search box to search for the Ajax Control Toolkit (enter “AjaxControlToolkit”). After you find it, click the Install button to add the Ajax Control Toolkit to your project. That’s all you have to do to install the Ajax Control Toolkit! Now we are ready to start using the Ajax Control Toolkit controls. Open the default.aspx page so we can modify the contents of the page. Erase everything contained in the Content control with the ID of BodyContent. After erasing the content, declare the following two controls: <asp:TextBox ID="vacationDate" runat="server" /> <ajaxToolkit:CalendarExtender TargetControlID="vacationDate" runat="server" /> The first control is a standard ASP.NET TextBox control and the second control is an Ajax Control Toolkit Calendar control. You should get intellisense as you type out the Ajax Control Toolkit Calendar control. If you don’t, then close and re-open the Default.aspx page. Now, let’s run our app. Hit the F5 button or select Debug, Start Debugging from the Visual Studio menu. You will get the error message “MsAjaxBundle is not a valid script name”. Don’t despair! We need to update the Master Page so it uses the ToolkitScriptManager instead of the default ScriptManager. Open the Site.Master file and find where the ScriptManager is declared. 
    The ScriptManager should look like this: <asp:ScriptManager runat="server"> <Scripts> <%--Framework Scripts--%> <asp:ScriptReference Name="MsAjaxBundle" /> <asp:ScriptReference Name="jquery" /> <asp:ScriptReference Name="jquery.ui.combined" /> <asp:ScriptReference Name="WebForms.js" Assembly="System.Web" Path="~/Scripts/WebForms/WebForms.js" /> <asp:ScriptReference Name="WebUIValidation.js" Assembly="System.Web" Path="~/Scripts/WebForms/WebUIValidation.js" /> <asp:ScriptReference Name="MenuStandards.js" Assembly="System.Web" Path="~/Scripts/WebForms/MenuStandards.js" /> <asp:ScriptReference Name="GridView.js" Assembly="System.Web" Path="~/Scripts/WebForms/GridView.js" /> <asp:ScriptReference Name="DetailsView.js" Assembly="System.Web" Path="~/Scripts/WebForms/DetailsView.js" /> <asp:ScriptReference Name="TreeView.js" Assembly="System.Web" Path="~/Scripts/WebForms/TreeView.js" /> <asp:ScriptReference Name="WebParts.js" Assembly="System.Web" Path="~/Scripts/WebForms/WebParts.js" /> <asp:ScriptReference Name="Focus.js" Assembly="System.Web" Path="~/Scripts/WebForms/Focus.js" /> <asp:ScriptReference Name="WebFormsBundle" /> <%--Site Scripts--%> </Scripts> </asp:ScriptManager> We need to make three changes to the ScriptManager: 1) We need to replace the asp:ScriptManager with the ajaxToolkit:ToolkitScriptManager 2) We need to remove the MsAjaxBundle bundle from the ScriptReferences 3) We need to remove the Assembly=”System.Web” attributes from the ScriptReferences After you make these three changes, the ToolkitScriptManager should look like this: <ajaxToolkit:ToolkitScriptManager runat="server"> <Scripts> <%--Framework Scripts--%> <asp:ScriptReference Name="jquery" /> <asp:ScriptReference Name="jquery.ui.combined" /> <asp:ScriptReference Name="WebForms.js" Path="~/Scripts/WebForms/WebForms.js" /> <asp:ScriptReference Name="WebUIValidation.js" Path="~/Scripts/WebForms/WebUIValidation.js" /> <asp:ScriptReference Name="MenuStandards.js" Path="~/Scripts/WebForms/MenuStandards.js" /> <asp:ScriptReference Name="GridView.js" Path="~/Scripts/WebForms/GridView.js" /> <asp:ScriptReference Name="DetailsView.js" Path="~/Scripts/WebForms/DetailsView.js" /> <asp:ScriptReference Name="TreeView.js" Path="~/Scripts/WebForms/TreeView.js" /> <asp:ScriptReference Name="WebParts.js" Path="~/Scripts/WebForms/WebParts.js" /> <asp:ScriptReference Name="Focus.js" Path="~/Scripts/WebForms/Focus.js" /> <asp:ScriptReference Name="WebFormsBundle" /> <%--Site Scripts--%> </Scripts> </ajaxToolkit:ToolkitScriptManager> After we make these changes, the app should run successfully. You’ll get a page which contains a text field. When you click inside the text field, a popup calendar is displayed. Ajax Control Toolkit and jQuery You might have noticed that the ScriptManager includes a reference to jQuery by default. We did not remove that reference when we converted the ScriptManager to a ToolkitScriptManager. You can use the Ajax Control Toolkit and jQuery side-by-side. Here’s how you can modify the Default.aspx page so that it contains two popup calendars. The first popup calendar is created with the Ajax Control Toolkit and the second popup calendar is created with jQuery: <asp:TextBox ID="vacationDate" runat="server" /> <ajaxToolkit:CalendarExtender TargetControlID="vacationDate" runat="server" /> <input id="birthDate" /> <script> $("#birthDate").datepicker(); </script> Before you can start using jQuery UI plugins, you need to complete one more step.
You need to add the jQuery UI themes bundle to the HEAD of the Site.Master page like this: <head runat="server"> <meta charset="utf-8" /> <title><%: Page.Title %> - My ASP.NET Application</title> <asp:PlaceHolder runat="server"> <%: Scripts.Render("~/bundles/modernizr") %> </asp:PlaceHolder> <webopt:BundleReference runat="server" Path="~/Content/css" /> <webopt:BundleReference runat="server" Path="~/Content/themes/base/css" /> <link href="~/favicon.ico" rel="shortcut icon" type="image/x-icon" /> <meta name="viewport" content="width=device-width" /> <asp:ContentPlaceHolder runat="server" ID="HeadContent" /> </head> The markup above includes a reference to the jQuery UI themes bundle: <webopt:BundleReference runat="server" Path="~/Content/themes/base/css" /> Now that we have made these changes, we can use the Ajax Control Toolkit and jQuery at the same time. When you run your app, you get two popup calendars. When you click in the first text field, the Ajax Control Toolkit calendar appears. When you click in the second text field, the jQuery UI popup calendar appears: Bug Fixes in this Release We made several important bug fixes with this release of the Ajax Control Toolkit and integrated several Pull Requests contributed by the community. Our primary focus during this sprint was fixing issues with the CascadingDropDown control. We fixed the following issues associated with the CascadingDropDown: · 9490 – Don’t disable dropdowns in CascadingDropDown · 14223 – CascadingDropDown Reset or Setting SelectedValue from WebMethod · 12189 – CascadingDropDown not obeying disabled state of DropDownList · 22942 – CascadingDropDown infinite loop (with solution) · 8671 – CascadingDropdown options is null or undefined · 14407 – CascadingDropDown: populated client event happens too often · 17148 – CascadingDropDown – Add “UseHttpGet” property · 10221 – No NotNull check in CascadingDropDown · 12228 – Provide property for case-insensitive DefaultValue lookup in CascadingDropdown We also fixed the following two issues which are not directly related to the CascadingDropDown control: · 27108 – CalendarExtender: Bug when selecting December shifts to January. · 27041 – Input controls with HTML5 types do not post back in Firefox, Chrome, Safari Finally, we integrated several Pull Requests submitted by the community (Thank you community!): · Added French localized resources for the AjaxFileUpload · Resolved an issue which prevented the AjaxFileUpload control from working with pages that require query string variables. · Extended the AjaxFileUploadEventArgs class to include the current file index in the queue and the total number of files in the queue. · Fixed an issue with TabContainer and TabPanel which caused the OnActiveTabChanged event to fire too often. Summary I’m happy to see the Ajax Control Toolkit move forward into the brave new world of ASP.NET 4.5! In this latest release, we focused on ensuring that the Ajax Control Toolkit works smoothly with ASP.NET 4.5 applications. We also fixed the highest voted bugs associated with the CascadingDropDown control and integrated several Pull Request submitted by the community. Once again, I want to thank the Superexpert team for their hard work on this release!

    Read the article

  • Quick guide to Oracle IRM 11g: Server installation

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index This is the first of a set of articles designed to assist with the successful installation, configuration and deployment of a document security solution using Oracle IRM. This article goes through a set of simple instructions which detail how to download, install and configure the IRM server, the starting point for building a document security solution. This article contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform-specific information. Contents Introduction Downloading the software Preparing a database Creating the schema WebLogic Server installation Installing Oracle IRM Introduction Because we are using Oracle Enterprise Linux in this guide, and before we get into the detail of IRM, I'd like to share some Linux tips to make life a bit easier. Use a 64-bit platform: IRM 11g runs just fine on a 32-bit server, but with 64-bit you will build a more future-proof service. Download and install the latest Java JDK package. Make sure you get the 64-bit version if you are on a 64-bit server. Configure Linux to use a good Yum server to simplify installing packages. For Oracle Enterprise Linux we maintain a great public Yum server here. Have at least 20GB of free disk space on the partition where you intend to install the IRM server. The downloads are big; you then extract them and then install. This quickly consumes disk space, which you can easily recover by deleting the downloaded and extracted files afterwards. But it's nice to have the disk space spare to keep these around in case you need to restart any part of the installation process again. Downloading the software OK, so before you can do anything, you need the software install kits. Luckily Oracle allows you to freely download every technology we create. You'll need to get the following: Oracle WebLogic Server, Oracle Database, Oracle Repository Creation Utility (rcu) and the Oracle IRM server. You can use Microsoft SQL Server 2005 or 2008; in this guide I've used Oracle RDBMS 11gR2 for Linux. Preparing the database I'm not going to go through the finer points of installing the database. There are many very good guides on installing the Oracle Database. However, one thing I would suggest you think about is enabling TDE, network encryption and using Database Vault. These Oracle database security technologies are excellent for creating a complete end-to-end security solution. No point in going to all the effort to secure document access with IRM when someone can go directly to the database and assign themselves rights to documents. To understand this further, you can see a video of the IRM service using these database security technologies here. With a database up and running we need to create a schema to hold the IRM data. This schema contains the rights model, cryptographic keys, user account IDs, associated rights, etc. Creating the IRM database schema Oracle provides the Repository Creation Utility, which builds your schema; extract the files from the rcu zip. Then in a terminal window: cd /oracle/install/rcu/bin ./rcu This will launch the Repository Creation Utility and you will be presented with the image to the right. Hit next and continue on to the next dialog. You are asked if you are going to be creating a new schema or wish to drop an existing one; you obviously just need to click next at this point to create a new schema.
The RCU next needs to know where your database is so you'll need the following details of your database instance. Below, for reference, is the information for my installation. Hostname: irm.oracle.demo Port: 1521 (This is the default TCP port for the Oracle Database) Service Name: irm.oracle.demo. Note this is not the SID, but the service name. Username: sys Password: ******** Role: SYSDBA And then select next. Because the RCU contains schemas for many of the Oracle Technologies, you now need to select to just deploy the Oracle IRM schema. Open the section under "Enterprise Content Management" and tick the "Oracle Information Rights Management" component. Note that you also get the chance to select a prefix which defaults to "DEV" (for development). I usually change this to something that reflects my own install. PROD for a production system, INT for internal only etc. The next step asks for the passwords for the schema users. We are only creating one schema here so you just enter one password. Some brave souls store this password in an Excel spreadsheet which is then secure against the IRM server you're about to install in this guide. Nearing the end of the schema creation is the mapping of the tablespaces to the schema. Note I had setup a table space already that was encrypted using TDE and at this point I was able to select that tablespace by clicking in the "Default Tablespace" column. The next dialog confirms your actions and clicking on next causes it to create the schema and default data. After this you are presented with the completion summary. WebLogic Server installation The database is now ready and the next step is to install the application server. Oracle IRM 11g is a JEE application and currently only supported in Oracle WebLogic Server. So the next step is get WebLogic Server installed, which is pretty easy. Depending on the version you download, you either run the binary or for a 64 bit platform (like mine) run the following command. java -d64 -jar wls1033_generic.jar And in the resulting dialog hit next to start walking through the install. Next choose a directory into which you will install WebLogic Server. I like to change from the default and install into /oracle/. Then all my software goes into this one folder, all owned by the "oracle" user. The next dialog asks for your Oracle support information to ensure you are kept up to date. If you have an Oracle support account, enter your details but for most evaluation systems I leave these fields blank. Again, for evaluation or development systems, I usually stick with the "Typical" install type which you are next asked for. Next you are asked for the JDK which will be used for the server. When installing from the generic jar on a 64bit platform like in this guide, no JDK is bundled with the installer. But as you can see in the image on the right, that it does a good job of detecting the one you've got installed. Defaults for the install directories are usually taken, no changes here, just click next. And finally we are ready to install, hit next, sit back and relax. Typically this takes about 10 minutes. After the install, do not run the quick start, we need to deploy the IRM install itself from which we will create a new WebLogic domain. For now just hit done and lets move to the final step of the installation process. Installing Oracle IRM The last piece of the puzzle to getting your environment ready is to deploy the IRM files themselves. 
    Unzip the Oracle Enterprise Content Management 11g zip file and it will create a Disk1 directory. Switch to this folder and in the console run ./runInstaller. This will launch the installer, which will also ask for the location of the JDK. Look at the image on the right for the detail. You should now see the first stage of the IRM installation. The dialog warns that you need to have a WebLogic server installed and have created the schemas, but you've just done all that above (I hope) so we are ready to go. The installer now checks that you have all the required libraries installed and that other system parameters are correct. Because nearly all of my development and evaluation installations have the database server on the same system, the installer passes these checks without issue... Next... Now choose where to install the IRM files; you must install into the same Middleware Home as the WebLogic Server installation you just performed. Usually the installer already defaults to this location anyway. I also tend to change the Oracle Home Directory to Oracle_IRM so it's clear this is just an IRM install. The summary page tells you about the space needed to deploy the files. Unfortunately the IRM install comes with all of the other Oracle ECM software; you can't just select the IRM files, so everything gets deployed to disk and uses 1.6GB of space! Not fun, but Oracle has to package up similar technologies, otherwise we would have a very large number of installers to QA and manage – again, not fun. Hit Install; time for another drink, maybe a piece of cake or a donut... on a half-decent system this part of the install took under 10 minutes. Finally the installation of your IRM server is complete; click on finish, and the next phase is to create the WebLogic domain and start configuring your server. Now move on to the next article in this guide... configuring your IRM server ready to seal your first document.
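
    For quick reference, the three commands used to launch the installers in this walkthrough are collected below. The paths and the wls1033_generic.jar file name follow the versions used in the article; adjust them to whatever you actually downloaded and wherever you extracted it.

      # 1. Create the IRM schema with the Repository Creation Utility
      cd /oracle/install/rcu/bin
      ./rcu

      # 2. Install WebLogic Server using the 64-bit generic installer
      java -d64 -jar wls1033_generic.jar

      # 3. Install Oracle IRM from the extracted ECM media
      cd Disk1
      ./runInstaller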

    Read the article

  • Five Bucks says you’ll Bookmark this Site: jsFiddle.net

    - by SGWellens
    In my never-ending wandering of technical web sites, I've been encountering links to jsFiddle.net more and more. Why? Because it is an incredibly useful site: It is a great 'sandbox' to play in. You can test, modify and retest HTML, CSS, and JavaScript code. It is a great way to communicate technical issues and share code samples. There are four screen areas: three inputs* and one output. The three inputs are: HTML, CSS and JavaScript. The output is: the rendered result. Here's a cropped screen shot: What am I thinking? Here's the actual page: Demo *There are other inputs. You can select the level of HTML you want to run against (HTML5, HTML 4.01 Strict, etc.). You can add various versions of JavaScript libraries (jQuery, MooTools, YUI, etc.). Many other options are available. If I wanted to share this code with someone manually, they would have to copy and paste three separate code chunks into their development environment. And maybe load some external libraries. Not many people are willing to make such an effort. Instead, with jsFiddle, they can just go to the link and click Run. Awesome. I hope someone finds this useful (and I was kidding about the five bucks). Steve Wellens CodeProject

    Read the article

  • How To Manage Your Remote Desktop Connections Easily

    - by Gopinath
    If you regularly access PCs using Microsoft Remote Desktop Connection, here is a nice utility to make your life easier. Remote Desktop Organizer is a freeware application that allows you to easily organize your multiple remote desktop connections in one place. It has many useful features (we run them down after the break) but my favorites are the ability to organize & save connection details and the ease with which it allows you to switch between multiple connections. The above screen grab of the application shows how we can save & organize multiple connections by creating a folder hierarchy, and also shows multiple remote connections in one window for easy switching. These two features are huge time savers for me as I often connect to multiple servers and switch between them. The complete list of features, as given by the official website of the freeware: Organize remote desktop connections in folders and subfolders Drag and drop support for moving connections and folders Tabbed connections Quick Connection Connect to console Change connection port Minimize to system tray (optional) Close to system tray (optional) To run this application you need Microsoft .NET Framework 2.0 or higher installed on your PC. Download Remote Desktop Organizer Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • Problems Running GTA San Andreas under Wine

    - by Samyon Sahnovitch
    Although I can actually run San Andreas under wine first of all this is the menu: Well when I press the first menu item, I can play but there are some weird "bugs" with the graphics. The sky turns black, some 3d figures appearing on the screen etc. + the game is very slow. Intel built in graphic card so drivers are built in as well. When I run it on Windows - same computer everything works fine. Output from lspci if it will help. 00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07) 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03) 00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 03) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03) 00:1d.3 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03) 00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93) 00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03) 00:1f.2 SATA controller: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode] (rev 03) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03) 00:1f.6 Signal processing controller: Intel Corporation 82801I (ICH9 Family) Thermal Subsystem (rev 03) 02:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01) 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02)

    Read the article

  • The dislikes of TDD

    - by andrewstopford
    I enjoy debates about TDD and Brian Harry's blog post is no exception. Brian sounds out what he likes and dislikes about TDD and it's the dislikes I'll focus on. The idea of having unit tests that cover virtually every line of code that I’ve written that I have to refactor every time I refactor my code makes me shudder. Doing this way makes me take nearly twice as long as it would otherwise take and I don’t feel like I get sufficient benefits from it. Refactoring your tests to match your refactored code sounds like the tests are suffering. Too many hard dependencies with no SOLID concerns are a sure-fire reason you would do this. Maybe at the start of a TDD cycle you would need to do this as your design evolves and you remove these dependencies, but this should quickly be resolved as you refactor. If you find yourself still doing it then stop and look back at your design. Don’t get me wrong, I’m a big fan of unit tests. I just prefer to write them after the code has stopped shaking a bit. In fact most of my early testing is “manual”. Either I write a small UI on top of my service that allows me to plug in values and try it or write some quick API tests that I throw away as soon as I have validated them. The problem with this is that a UI can make assumptions about your code that you then just unit test around, and very quickly the design becomes bad and technical debt sweeps in. If you want to black-box test your code with a UI then do so after your TDD cycles, not before. This is probably my biggest issue with a literal TDD interpretation. TDD says you never write a line of code without a failing test to show you need it. I find it leads developers down a dangerous path. Without any help from a methodology, I have met way too many developers in my life that “back into a solution”. By this, I mean they write something, it mostly works and they discover a new requirement so they tack it on, and another and another and when they are done, they’ve got a monstrosity of special cases each designed to handle one specific scenario. There’s way more code than there should be and it’s way too complicated to understand. I believe in finding general solutions to problems from which all the special cases naturally derive rather than building a solution of special cases. In my mind, to do this, you have to start by conceptualizing and coding the framework of the general algorithm. For me, that’s a relatively monolithic exercise. TDD is a development practice, not a methodology; the danger is that the solution becomes a mass of different things that violate DRY. TDD won't solve these problems; only good communication and practices like pairing will help. Above all else, an assumption that TDD replaces a methodology is a mistake; combine it with whatever works for your team\business, but only good communication will help. A good naming scheme\structure for folders, files and tests can help you and your team isolate what tests are for what.

    Read the article

  • Fixing my SQL Directory NTFS ACLS

    - by Shawn Cicoria
I run my development server by booting to a VHD (Windows Server 2008 R2 x64). In that instance I also have an attached VHD (attached via a script at boot time using Task Scheduler), and that VHD is where my SQL instances are installed. The other day, acting hastily, I chmod'ed my ACLs; wow, what a day after that. So, in order to fix it, I created this set of BAT commands that resets everything back to an operational state. It isn't 100% of what you get out of the box, and I didn't want to run a "repair", but it is all operational again.

    setlocal
    SET Inst100Path=H:\Program Files\Microsoft SQL Server\100
    REM GOTO SQLE
    SET InstanceName=MSSQLSERVER
    SET InstIdPath=H:\Program Files\Microsoft SQL Server\MSSQL10.%InstanceName%
    SET Group=SQLServerMSSQLUser$SCICORIA-HV1$%InstanceName%
    SET AgentGroup=SQLServerSQLAgentUser$SCICORIA-HV1$%InstanceName%
    ICACLS "%InstIdPath%\MSSQL" /T /Q /grant "%Group%":(OI)(CI)FX
    ICACLS "%InstIdPath%\MSSQL\backup" /T /Q /grant "%Group%":(OI)(CI)F
    ICACLS "%InstIdPath%\MSSQL\data" /T /Q /grant "%Group%":(OI)(CI)F
    ICACLS "%InstIdPath%\MSSQL\FTdata" /T /Q /grant "%Group%":(OI)(CI)F
    ICACLS "%InstIdPath%\MSSQL\Jobs" /T /Q /grant "%Group%":(OI)(CI)F
    ICACLS "%InstIdPath%\MSSQL\binn" /T /Q /grant "%Group%":(OI)(CI)RX
    ICACLS "%InstIdPath%\MSSQL\Log" /T /Q /grant "%Group%":(OI)(CI)F
    ICACLS "%Inst100Path%" /T /Q /grant "%Group%":(OI)(CI)RX
    ICACLS "%Inst100Path%\shared\Errordumps" /T /Q /grant "%Group%":(OI)(CI)RXW
    ICACLS "%InstIdPath%\MSSQL" /T /Q /grant "%AgentGroup%":(OI)(CI)RX
    ICACLS "%InstIdPath%\MSSQL\binn" /T /Q /grant "%AgentGroup%":(OI)(CI)F
    ICACLS "%InstIdPath%\MSSQL\Log" /T /Q /grant "%AgentGroup%":(OI)(CI)F
    ICACLS "%Inst100Path%" /T /Q /grant "%AgentGroup%":(OI)(CI)RX
    REM THIS IS THE SQL EXPRESS INSTANCE
    :SQLE
    SET InstanceName=SQLEXPRESS
    SET InstIdPath=H:\Program Files\Microsoft SQL Server\MSSQL10.%InstanceName%
    SET Group=SQLServerMSSQLUser$SCICORIA-HV1$%InstanceName%
    SET AgentGroup=SQLServerSQLAgentUser$SCICORIA-HV1$%InstanceName%
    ICACLS "%InstIdPath%\MSSQL" /T /Q /grant "%Group%":(OI)(CI)FX
    ICACLS "%InstIdPath%\MSSQL\backup" /T /Q /grant "%Group%":(OI)(CI)F
    ICACLS "%InstIdPath%\MSSQL\data" /T /Q /grant "%Group%":(OI)(CI)F
    ICACLS "%InstIdPath%\MSSQL\FTdata" /T /Q /grant "%Group%":(OI)(CI)F
    ICACLS "%InstIdPath%\MSSQL\Jobs" /T /Q /grant "%Group%":(OI)(CI)F
    ICACLS "%InstIdPath%\MSSQL\binn" /T /Q /grant "%Group%":(OI)(CI)RX
    ICACLS "%InstIdPath%\MSSQL\Log" /T /Q /grant "%Group%":(OI)(CI)F
    ICACLS "%Inst100Path%" /T /Q /grant "%Group%":(OI)(CI)RX
    ICACLS "%Inst100Path%\shared\Errordumps" /T /Q /grant "%Group%":(OI)(CI)RXW
    ICACLS "%InstIdPath%\MSSQL" /T /Q /grant "%AgentGroup%":(OI)(CI)RX
    ICACLS "%InstIdPath%\MSSQL\binn" /T /Q /grant "%AgentGroup%":(OI)(CI)F
    ICACLS "%InstIdPath%\MSSQL\Log" /T /Q /grant "%AgentGroup%":(OI)(CI)F
    ICACLS "%Inst100Path%" /T /Q /grant "%AgentGroup%":(OI)(CI)RX
    endlocal

    Read the article

  • What’s the use of code reuse?

    - by Tony Davis
All great developers write reusable code, don't they? Well, maybe, but as with all statements regarding what "great" developers do or don't do, it's probably an over-simplification. A novice programmer, in particular, will encounter in the literature a general assumption of the importance of code reusability. They spend time worrying about DRY (don't repeat yourself), moving logic into specific "helper" modules that they can then reuse, agonizing about the minutiae of the class structure, inheritance and interface design that will promote easy reuse.

Unfortunately, writing code specifically for reuse often leads to complicated object hierarchies and inheritance models that are anything but reusable. If, instead, one strives to write simple code units that are highly maintainable and perform a single function in a concise, isolated fashion, then the potential for reuse simply "drops out" as a natural by-product. Programmers, of course, care about these principles, about encapsulation and clean interfaces that don't expose inner workings and allow easy pluggability. This is great when it helps with the maintenance and development of code, but how often, in practice, do we actually reuse our code? Most DBAs and database developers are familiar with the practical reasons for the limited opportunities to reuse database code and its potential downsides. However, surely elsewhere in our code base, reuse happens often. After all, we can all name examples, such as date/time handling modules, which, if we write them with enough care, we can plug in to many places.

I spoke to a developer just yesterday who looked me in the eye and told me that in 30+ years as a developer (a successful one, I'd add), he'd never once reused his own code. As I sat blinking in disbelief, he explained that, of course, he always thought he would reuse it. He'd often agonized over its design, certain that he was creating code of great significance that he and other generations would reuse, with grateful tears misting their eyes. In fact, it never happened. He had, in his head, most of the algorithms he needed and would simply write the code from scratch each time, refining the algorithms and tailoring the code to meet the specific requirements. It was, he said, simply quicker to do that than dig out the old code, check it, correct the mistakes, and adapt it.

Is this a common experience, or just a strange anomaly? Viewed in a certain light, building code with a focus on reusability seems to hark back to a past age when people built cars and music systems with the idea that someone else could and would replace and reuse the parts. Technology advances so rapidly that the next time you need the "same" code, it's likely a new technique, or a whole new language, has emerged in the meantime, better equipped to tackle the task. Maybe we should be less fearful of the idea that we could write code well suited to the system requirements, but with little regard for reuse potential, and then rewrite a better version from scratch the next time.

    Read the article

  • How do I get KLIPS or NETKEY on 11.10 server?

    - by Incognito
I'm attempting to run Openswan on my Ubuntu 11.10 server. All I've done so far is install openswan from the package manager and attempt to set up the conf files. However, IPsec support seems to be broken, so Openswan can't do its thing.

Attempt to start IPsec:

    $ sudo ipsec setup --start
    ipsec_setup: Starting Openswan IPsec 2.6.28...
    ipsec_setup: No KLIPS support found while requested, desperately falling back to netkey
    ipsec_setup: Even NETKEY support is not there, aborting

Verify IPsec:

    $ sudo ipsec verify
    Checking your system to see if IPsec got installed and started correctly:
    Version check and ipsec on-path                                 [OK]
    Linux Openswan U2.6.28/K(no kernel code presently loaded)
    Checking for IPsec support in kernel                            [FAILED]
    Checking that pluto is running                                  [FAILED]
    whack: Pluto is not running (no "/var/run/pluto/pluto.ctl")
    Checking for 'ip' command                                       [OK]
    Checking for 'iptables' command                                 [OK]
    Opportunistic Encryption Support                                [DISABLED]

IPsec version:

    $ sudo ipsec version
    Linux Openswan U2.6.28/K(no kernel code presently loaded)
    See `ipsec --copyright' for copyright information.

Linux build:

    $ uname -a
    Linux metabox 2.6.18-028stab092.1 #1 SMP Wed Jul 20 19:47:12 MSD 2011 x86_64 x86_64 x86_64 GNU/Linux

How can I go about correcting this problem with IPsec? This is a hosted VPS, and I'd like to avoid a kernel rebuild if I can find some other alternative.
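As a quick diagnostic (a sketch, not from the original post; the module names below are simply the ones NETKEY normally relies on), you can check whether the running kernel can expose NETKEY at all. On many hosted VPS kernels the container cannot load these modules because the host kernel was built without them:

    #!/bin/bash
    # try to load the modules NETKEY normally needs
    for m in af_key ah4 esp4 xfrm4_tunnel xfrm_user; do
        sudo modprobe "$m" 2>/dev/null && echo "loaded $m" || echo "could not load $m"
    done
    # af_key normally exposes the PF_KEY interface here once it is available
    ls -l /proc/net/pfkey 2>/dev/null || echo "no PF_KEY interface in this kernel"

If none of these can be loaded, and the 028stab kernel version suggests an OpenVZ-style container that shares the host's kernel, the practical options are usually to ask the hosting provider to enable IPsec support in the host kernel or to fall back to a userspace VPN; rebuilding a kernel inside the container will not help.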

    Read the article

  • Unable to uninstall or reinstall Ubuntu desktop

    - by sherwyngsw
The uninstall-wubi option doesn't work. When I try reinstalling, it shows an error and tells me to check the "wubi 12.04 rev266" log. Everything goes fine until the bottom, which shows this:

    There is another file or directory with this name. Please remove it before continuing.
    Traceback (most recent call last):
      File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
      File "\lib\wubi\backends\win32\backend.py", line 81, in select_target_dir
    Exception: Cannot install into C:\ubuntu. There is another file or directory with this name. Please remove it before continuing.
    05-25 15:20 DEBUG TaskList: # Cancelling tasklist
    05-25 15:20 DEBUG TaskList: # Finished tasklist
    05-25 15:20 ERROR root: Cannot install into C:\ubuntu. There is another file or directory with this name. Please remove it before continuing.
    Traceback (most recent call last):
      File "\lib\wubi\application.py", line 58, in run
      File "\lib\wubi\application.py", line 132, in select_task
      File "\lib\wubi\application.py", line 158, in run_installer
      File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
      File "\lib\wubi\backends\win32\backend.py", line 81, in select_target_dir
    Exception: Cannot install into C:\ubuntu. There is another file or directory with this name. Please remove it before continuing.

What do I do? I've tried the uninstall-wubi option, but all it shows is "reinstall using recommended settings", which doesn't do anything.

OK, I've tried installing it onto another hard drive and I got this:

    Traceback (most recent call last):
      File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
      File "\lib\wubi\backends\win32\backend.py", line 117, in create_uninstaller
      File "\lib\wubi\backends\win32\registry.py", line 45, in set_value
    WindowsError: [Errno 5] Access is denied
    05-26 16:12 DEBUG TaskList: # Cancelling tasklist
    05-26 16:12 DEBUG TaskList: # Finished tasklist
    05-26 16:12 ERROR root: [Errno 5] Access is denied
    Traceback (most recent call last):
      File "\lib\wubi\application.py", line 58, in run
      File "\lib\wubi\application.py", line 132, in select_task
      File "\lib\wubi\application.py", line 158, in run_installer
      File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
      File "\lib\wubi\backends\win32\backend.py", line 117, in create_uninstaller
      File "\lib\wubi\backends\win32\registry.py", line 45, in set_value
    WindowsError: [Errno 5] Access is denied

    Read the article

  • Trying to install apache 2.4.10 with openssl 1.0.1i

    - by AlexMA
I need to install Apache 2.4.10 using OpenSSL 1.0.1i. I compiled OpenSSL from source with:

    $ ./config \
      --prefix=/opt/openssl-1.0.1e \
      --openssldir=/opt/openssl-1.0.1e
    $ make
    $ sudo make install

and Apache with:

    ./configure --prefix=/etc/apache2 \
      --enable-access_compat=shared --enable-actions=shared --enable-alias=shared \
      --enable-allowmethods=shared --enable-auth_basic=shared --enable-authn_core=shared \
      --enable-authn_file=shared --enable-authz_core=shared --enable-authz_groupfile=shared \
      --enable-authz_host=shared --enable-authz_user=shared --enable-autoindex=shared \
      --enable-dir=shared --enable-env=shared --enable-headers=shared --enable-include=shared \
      --enable-log_config=shared --enable-mime=shared --enable-negotiation=shared \
      --enable-proxy=shared --enable-proxy_http=shared --enable-rewrite=shared \
      --enable-setenvif=shared --enable-ssl=shared --enable-unixd=shared \
      --enable-ssl --with-ssl=/opt/openssl-1.0.1i --enable-ssl-staticlib-deps --enable-mods-static=ssl
    make

(I would run sudo make install next, but I get an error.) I'm essentially following the guide here, except with slightly newer versions. My problem is that I get a linker error when I run make for Apache:

    ...
    Making all in support
    make[1]: Entering directory `/home/developer/downloads/httpd-2.4.10/support'
    make[2]: Entering directory `/home/developer/downloads/httpd-2.4.10/support'
    /usr/share/apr-1.0/build/libtool --silent --mode=link x86_64-linux-gnu-gcc -std=gnu99 -pthread -L/opt/openssl-1.0.1i/lib -lssl -lcrypto \
        -o ab ab.lo /usr/lib/x86_64-linux-gnu/libaprutil-1.la /usr/lib/x86_64-linux-gnu/libapr-1.la -lm
    /usr/bin/ld: /opt/openssl-1.0.1i/lib/libcrypto.a(dso_dlfcn.o): undefined reference to symbol 'dlclose@@GLIBC_2.2.5'

I tried the answer here, but no luck. I would prefer to just use aptitude, but unfortunately the versions I need aren't available yet. If anyone knows how to fix the linker problem (or what I think is a linker problem), or knows of a better way to tell Apache to use a newer OpenSSL, it would be greatly appreciated; I've got apache 1.0.1i working otherwise.
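One thing that sometimes resolves this particular linker error (an assumption on my part, not something stated in the post): the static libcrypto.a uses dlopen/dlclose internally, so the final link needs libdl pulled in explicitly, which can be done by passing LIBS to configure, for example:

    make clean
    # repeat the same --enable-* options as above; only the LIBS setting is new here
    LIBS="-ldl" ./configure --prefix=/etc/apache2 --enable-ssl --with-ssl=/opt/openssl-1.0.1i
    make

Alternatively, building OpenSSL with shared libraries (./config shared --prefix=...) sidesteps the static-link problem entirely, at the cost of having to point Apache at the resulting .so files at runtime.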

    Read the article

  • OWB 11gR2 - Early Arriving Facts

    - by Dawei Sun
A common challenge when building ETL components for a data warehouse is how to handle early arriving facts. OWB 11gR2 introduced a new feature to address this for dimensional objects, entitled Orphan Management. An orphan record is one that does not have a corresponding existing parent record. Orphan management automates the process of handling source rows that do not meet the requirements necessary to form a valid dimension or cube record. In this article, a simple example shows you how to use Orphan Management in OWB. We first import a sample MDL file that contains all the objects we need, then take some time to examine those objects. After that, we prepare the source data and deploy the target tables and the dimension/cube loading maps. Finally, we run the loading maps and check the data in the target dimension/cube tables. OK, let's start.

1. Import MDL file and examine sample project

First, download the zip file from here, which includes an MDL file and three source data files. Then open OWB Design Center and import orphan_management.mdl using the menu File->Import->Warehouse Builder Metadata. Now we have several objects in the BI_DEMO project, as below:
Mapping LOAD_CHANNELS_OM: the mapping for dimension loading.
Mapping LOAD_SALES_OM: the mapping for cube loading.
Dimension CHANNELS_OM: the dimension that contains channels data.
Cube SALES_OM: the cube that contains sales data.
Table CHANNELS_OM: the star implementation table of dimension CHANNELS_OM.
Table SALES_OM: the star implementation table of cube SALES_OM.
Table SRC_CHANNELS: the source table of channels data, which will be loaded into dimension CHANNELS_OM.
Tables SRC_ORDERS and SRC_ORDER_ITEMS: the source tables of sales data that will be loaded into cube SALES_OM.
Sequence CLASS_OM_DIM_SEQ: the sequence used for loading dimension CHANNELS_OM.

Dimension CHANNELS_OM

This dimension has a hierarchy with three levels: TOTAL, CLASS and CHANNEL. Each level has three attributes: ID (surrogate key), NAME and SOURCE_ID (business key). It has a standard star implementation. The orphan management policy and the default parent setting are shown in the following screenshots.
The orphan management policy options that you can set for loading are:
Reject Orphan: the record is not inserted.
Default Parent: you can specify a default parent record. This default record is used as the parent record for any record that does not have an existing parent record. If the default parent record does not exist, Warehouse Builder creates it. You specify the attribute values of the default parent record at the time of defining the dimensional object. If any ancestor of the default parent does not exist, Warehouse Builder also creates this record.
No Maintenance: this is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records.
While removing data from a dimension, you can select one of the following orphan management policies:
Reject Removal: Warehouse Builder does not allow you to delete the record if it has existing child records.
No Maintenance: this is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records.
(More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#insertedID1)

Cube SALES_OM

This cube references dimension CHANNELS_OM. It has three measures: AMOUNT, QUANTITY and COST.
The orphan management policy settings are shown in the following screenshot.
The orphan management policy options that you can set for loading are:
No Maintenance: Warehouse Builder does not actively detect, reject, or fix orphan rows.
Default Dimension Record: Warehouse Builder assigns a default dimension record for any row that has an invalid or null dimension key value. Use the Settings button to define the default parent row.
Reject Orphan: Warehouse Builder does not insert the row if it does not have an existing dimension record.
(More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#BABEACDG)

Mapping LOAD_CHANNELS_OM

This mapping loads source data from table SRC_CHANNELS to dimension CHANNELS_OM. The operator CHANNELS_IN is bound to table SRC_CHANNELS; CHANNELS_OUT is bound to dimension CHANNELS_OM. The TOTALS operator is used for generating a constant value for the top level in the dimension. The CLASS_FILTER operator is used to filter out the "invalid" class name, so we can see what happens when channel records with an "invalid" parent are loaded into the dimension.
Some properties of the dimension operator in this mapping are important to orphan management. See the screenshot below:
Create Default Level Records: if YES, default level records will be created. This property must be set to YES for dimensions and cubes if one of their orphan management policies is "Default Parent" or "Default Dimension Record". This property is set to NO by default, so the user may need to set it to YES manually.
LOAD policy for INVALID keys / LOAD policy for NULL keys: these two properties have the same meaning as in the dimension editor. The values are set to match the dimension when the user drops the dimension into the mapping, so the user does not need to modify them.
Record Error Rows: if YES, error rows will be inserted into the error table when loading the dimension.
REMOVE Orphan Policy: this property is used when removing data from a dimension. Since the dimension loading type is set to LOAD in this example, this property is disabled.

Mapping LOAD_SALES_OM

This mapping loads source data from tables SRC_ORDERS and SRC_ORDER_ITEMS to cube SALES_OM. This mapping looks a little complicated, but the operators in the red rectangle are used to filter out and generate the records with "invalid" or "null" dimension keys.
Some properties of the cube operator in a mapping are important to orphan management. See the screenshot below:
Enable Source Aggregation: should be checked in this example. If the default dimension record orphan policy is set for the cube operator, then it is recommended that source aggregation also be enabled. Otherwise, the orphan management processing may produce multiple fact rows with the same default dimension references, which will cause an "unstable rowset" execution error in the database, since the dimension refs are used as update match attributes for updating the fact table.
LOAD policy for INVALID keys / LOAD policy for NULL keys: these two properties have the same meaning as in the cube editor. The values are set to match the cube editor when the user drops the cube into the mapping, so the user does not need to modify them.
Record Error Rows: if YES, error rows will be inserted into the error table when loading the cube.

2. Deploy objects and mappings

We can now deploy the objects. First, make sure location SALES_WH_LOCAL has been correctly configured.
Then open Control Center Manager using the menu Tools->Control Center Manager. Expand BI_DEMO->SALES_WH_LOCAL and click the SALES_WH node in the project tree. We can see the following objects. Deploy all the objects in the following order:
Sequence CLASS_OM_DIM_SEQ
Tables CHANNELS_OM, SALES_OM, SRC_CHANNELS, SRC_ORDERS, SRC_ORDER_ITEMS
Dimension CHANNELS_OM
Cube SALES_OM
Mappings LOAD_CHANNELS_OM, LOAD_SALES_OM
Note that we deployed the source tables as well. Normally, we import source tables from a database instead of deploying them to the target schema. However, in this example, we designed the source tables in OWB and deployed them to the database for the purpose of this demonstration.

3. Prepare and examine source data

Before running the mappings, we need to populate and examine the source data. Run SRC_CHANNELS.sql, SRC_ORDERS.sql and SRC_ORDER_ITEMS.sql as the target user. Then we check the data in these three tables.

Table SRC_CHANNELS
SQL> select rownum, id, class, name from src_channels;
Records 1~5 are correct; they should be loaded into the dimension without error. Records 6, 7 and 8 have null parents; they should be loaded into the dimension with a default parent value, and inserted into the error table at the same time. Records 9, 10 and 11 have "invalid" parents; they should be rejected by the dimension and inserted into the error table.

Tables SRC_ORDERS and SRC_ORDER_ITEMS
SQL> select rownum, a.id, a.channel, b.amount, b.quantity, b.cost from src_orders a, src_order_items b where a.id = b.order_id;
Record 178 has a null dimension reference; it should be loaded into the cube with a default dimension reference, and inserted into the error table at the same time. Record 179 has an "invalid" dimension reference; it should be rejected by the cube and inserted into the error table. The other records should be aggregated and loaded into the cube correctly.

4. Run the mappings and examine the target data

In the Control Center Manager, expand BI_DEMO->SALES_WH_LOCAL->SALES_WH->Mappings, right-click the LOAD_CHANNELS_OM node, and click Start. Run mapping LOAD_SALES_OM the same way. When they have successfully finished, we can check the data in the target tables.

Table CHANNELS_OM
SQL> select rownum, total_id, total_name, total_source_id, class_id, class_name, class_source_id, channel_id, channel_name, channel_source_id from channels_om order by abs(dimension_key);
Records 1, 2 and 3 are the default dimension records for the three levels. Records 8, 10 and 15 are the loaded records that originally had null parents; we see their parent name (class_name) is set to DEF_CLASS_NAME. The records whose CHANNEL_NAME is Special_4, Special_5 or Special_6 are not loaded to this table because of the invalid parent.

Error Table CHANNELS_OM_ERR
SQL> select rownum, class_source_id, channel_id, channel_name, channel_source_id, err$$$_error_reason from channels_om_err order by channel_name;
All the records with a null or invalid parent are inserted into this error table. The error reason is "Default parent used for record" for the first three records, and "No parent found for record" for the last three.

Table SALES_OM
SQL> select a.*, b.channel_name from sales_om a, channels_om b where a.channels=b.channel_id;
The order record with a null channel_name has been loaded into the target table with a default channel_name. The one with an "invalid" channel_name is not loaded.

Error Table SALES_OM_ERR
SQL> select a.amount, a.cost, a.quantity, a.channels, b.channel_name, a.err$$$_error_reason from sales_om_err a, channels_om b where a.channels=b.channel_id(+);
The order records with a null or invalid channel_name are inserted into the error table. If the dimension reference column is null, the error reason is "Default dimension record used for fact"; if it is invalid, the error reason is "Dimension record not found for fact".

Summary

In summary, this article illustrated the Orphan Management feature in OWB 11gR2. Automated orphan management policies improve ETL developer and administrator productivity by addressing an important cause of cube and dimension load failures, without requiring developers to explicitly build logic to handle these orphan rows.

    Read the article

  • Help with a simple incremental backup script

    - by Evan
I'd like to run the following incomplete script weekly as a cron job to back up my home directory to an external drive mounted as /mnt/backups:

    #!/bin/bash
    #
    TIMEDATE=$(date +%b-%d-%Y-%k:%M)
    LASTBACKUP=pathToDirWithLastBackup
    rsync -avr --numeric-ids --link-dest=$LASTBACKUP /home/myfiles /mnt/backups/myfiles$TIMEDATE

My first question is: how do I correctly set LASTBACKUP to the most recently created directory in /mnt/backups? Secondly, I'm under the impression that using --link-dest means files that already exist in a previous backup will not be copied again in later backups, but will instead link back to the originally copied files. However, I don't want to retain old files forever. What would be the best way to remove all the backups before a certain date without losing files that may be linked into those backups by current backups? Basically, I'm looking to merge all the files before a certain date into that date's backup, if that makes more sense than the way I initially framed the question :). Does --link-dest create hard links, and if so, would just deleting previous directories avoid actually removing the linked files? Finally, I'd like to add a line to my script that compresses each newly created backup folder (/mnt/backups/myfiles$TIMEDATE). Based on reading this question, I was wondering if I could just add the line gzip --rsyncable /backups/myfiles$TIMEDATE after I run rsync, so that sequential rsync --link-dest executions would find the already copied and compressed files? I know that's a lot, so many thanks in advance for your help!!
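Not from the original script, but one way to fill in LASTBACKUP automatically is to sort the existing backup directories by modification time and take the newest. This is a minimal sketch, assuming backups live directly under /mnt/backups and are named myfiles<timestamp> (paths with spaces would need more careful quoting):

    #!/bin/bash
    TIMEDATE=$(date +%b-%d-%Y-%k:%M)
    # newest existing backup directory, if any
    LASTBACKUP=$(ls -1dt /mnt/backups/myfiles* 2>/dev/null | head -n 1)
    # only pass --link-dest when a previous backup actually exists
    LINKDEST=""
    if [ -n "$LASTBACKUP" ]; then
        LINKDEST="--link-dest=$LASTBACKUP"
    fi
    rsync -avr --numeric-ids $LINKDEST /home/myfiles "/mnt/backups/myfiles$TIMEDATE"

On the deletion question: --link-dest does create hard links, so removing an older backup directory only removes its directory entries; the underlying file data stays on disk as long as any newer backup still links to it, which makes pruning old directories safe.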

    Read the article
