Search Results

Search found 20946 results on 838 pages for 'at command'.


  • SQL SERVER – SSMS: Memory Usage By Memory Optimized Objects Report

    - by Pinal Dave
    At conferences and at speaking engagements at the local UG, there is one question that keeps coming up which I wish were never asked: “Why is SQL Server using up all the memory and not releasing it even when idle?” The answer can be long, and with the release of SQL Server 2014 it got even more complicated. SQL Server 2014 introduces the option of In-Memory OLTP, which is a completely new concept, and our dependency on memory has increased multifold. In reality, not much changes, but we now additionally have memory-optimized objects (tables and stored procedures) that reside completely in memory and improve performance. As a DBA, it is humanly impossible to get a handle on every innovation and new feature introduced in each new version. So today’s blog is about the report added to SSMS which gives a high-level view of this new feature. This report is available only from SQL Server 2014 onwards because the feature was introduced in SQL Server 2014; earlier versions of SQL Server Management Studio do not show the report in the list. If we try to launch the report on a database that does not have a memory-optimized filegroup defined, we see a message in the report. To demonstrate, I have created a fresh database called MemoryOptimizedDB with no special filegroup. Here is the query used to identify whether a database has a memory-optimized filegroup or not: SELECT TOP(1) 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX' Once we add a filegroup using the command below, we see a different version of the report. USE [master] GO ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA GO The report is still empty because we have not defined any memory-optimized table in the database; the total allocated size is shown as 0 MB. Now, let’s add the folder location to the filegroup and also create a few in-memory tables. We have used the nomenclature IMO to denote “In-Memory Optimized” objects. USE [master] GO ALTER DATABASE [MemoryOptimizedDB] ADD FILE ( NAME = N'MemoryOptimizedDB_IMO', FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO') TO FILEGROUP [IMO_FG] GO You may have to change the path based on your SQL Server configuration. Below is the script to create the table. USE MemoryOptimizedDB GO --Drop table if it already exists. IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL DROP TABLE dbo.SQLAuthority GO CREATE TABLE dbo.SQLAuthority ( ID INT IDENTITY NOT NULL, Name CHAR(500) COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal', CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID), INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) GO As soon as the above script is executed, both the table and the index are created. If we run the report again, we see something like below. Notice that the table memory is zero but the index is using memory. This is because a hash index needs memory to manage the buckets it creates, so even if the table is empty the index consumes memory. More about the internals of how in-memory indexes and tables work will be reserved for future posts. Now, use the script below to populate the table with 10,000 rows: INSERT INTO SQLAuthority VALUES (DEFAULT) GO 10000 Here is the same report after inserting 10,000 rows into our in-memory table. There are three sections in the whole report.
The three sections are: the total memory consumed by in-memory objects; a pie chart showing memory distribution by type of consumer (table, index, and system); and details of memory usage by each table. The information for all three is taken from a single DMV, sys.dm_db_xtp_table_memory_stats. This DMV contains memory usage statistics for both user and system in-memory tables. If we query the DMV and look at the data, we can easily notice that the system tables have negative object IDs. So, to look at user table memory usage, below is an over-simplified version of the query. USE MemoryOptimizedDB GO SELECT OBJECT_NAME(OBJECT_ID), * FROM sys.dm_db_xtp_table_memory_stats WHERE OBJECT_ID > 0 GO This report helps a DBA identify which in-memory objects are taking a lot of memory, which can be used as a pointer when designing a solution. I am sure that in the future we will discuss the whole concept of in-memory tables at length on this blog. To read more about In-Memory OLTP, have a look at the In-Memory OLTP Series at Balmukund’s Blog. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Memory, SQL Reports
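
    As a hedged aside (not from the original post), the same DMV can be rolled up into a more targeted per-table summary. The column names below are those documented for sys.dm_db_xtp_table_memory_stats in SQL Server 2014, so verify them against your build before relying on the query:

        USE MemoryOptimizedDB
        GO
        -- Per-table memory totals from the DMV behind the SSMS report.
        -- User tables only; system tables have negative object IDs.
        SELECT OBJECT_NAME(tms.object_id) AS table_name,
               tms.memory_allocated_for_table_kb,
               tms.memory_used_by_table_kb,
               tms.memory_allocated_for_indexes_kb,
               tms.memory_used_by_indexes_kb
        FROM   sys.dm_db_xtp_table_memory_stats AS tms
        WHERE  tms.object_id > 0
        ORDER  BY tms.memory_allocated_for_table_kb DESC;
        GO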

    Read the article

  • Why can't I change the AU_AU locale to en_US?

    - by sachin
    /bin/bash: warning: setlocale: LC_ALL: cannot change locale ( (unset)) Generating locales... en_US.ISO-8859-1... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done Generation complete. ganesha@ubuntu:~$ sudo update_locale LANG=en_US sudo: update_locale: command not found ganesha@ubuntu:~$ sudo update-locale LANG=en_US perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = "en_US:en", LC_ALL = " (unset)", LC_PAPER = "AU_AU.UTF-8", LC_ADDRESS = "AU_AU.UTF-8", LC_MONETARY = "AU_AU.UTF-8", LC_MEASUREMAUT = "AU_AU.UTF-8", LC_NUMERIC = "AU_AU.UTF-8", LC_TELEPHONE = "AU_AU.UTF-8", LC_MESSAGES = "AU_AU.UTF-8", LC_COLLATE = "AU_AU.UTF-8", LC_IDAUTIFICATION = "AU_AU.UTF-8", LC_CTYPE = "AU_AU.UTF-8", LC_TIME = "AU_AU.UTF-8", LC_NAME = "AU_AU.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). ganesha@ubuntu:~$ sudo update-locale LANG=en_US.UTF8 perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = "en_US:en", LC_ALL = " (unset)", LC_PAPER = "AU_AU.UTF-8", LC_ADDRESS = "AU_AU.UTF-8", LC_MONETARY = "AU_AU.UTF-8", LC_MEASUREMAUT = "AU_AU.UTF-8", LC_NUMERIC = "AU_AU.UTF-8", LC_TELEPHONE = "AU_AU.UTF-8", LC_MESSAGES = "AU_AU.UTF-8", LC_COLLATE = "AU_AU.UTF-8", LC_IDAUTIFICATION = "AU_AU.UTF-8", LC_CTYPE = "AU_AU.UTF-8", LC_TIME = "AU_AU.UTF-8", LC_NAME = "AU_AU.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). ganesha@ubuntu:~$ gedit ~/.profile (process:6467): Gtk-WARNING **: Locale not supported by C library. Using the fallback 'C' locale. ganesha@ubuntu:~$ sudo update-locale LANG=en_US.UTF8 perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = "en_US:en", LC_ALL = " (unset)", LC_PAPER = "AU_AU.UTF-8", LC_ADDRESS = "AU_AU.UTF-8", LC_MONETARY = "AU_AU.UTF-8", LC_MEASUREMAUT = "AU_AU.UTF-8", LC_NUMERIC = "AU_AU.UTF-8", LC_TELEPHONE = "AU_AU.UTF-8", LC_MESSAGES = "AU_AU.UTF-8", LC_COLLATE = "AU_AU.UTF-8", LC_IDAUTIFICATION = "AU_AU.UTF-8", LC_CTYPE = "AU_AU.UTF-8", LC_TIME = "AU_AU.UTF-8", LC_NAME = "AU_AU.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). ganesha@ubuntu:~$ sudo apt-get install --reinstall locales Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 0 not upgraded. Need to get 0 B/3359 kB of archives. After this operation, 0 B of additional disk space will be used. perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = "en_US:en", LC_ALL = " (unset)", LC_TIME = "AU_AU.UTF-8", LC_MONETARY = "AU_AU.UTF-8", LC_CTYPE = "AU_AU.UTF-8", LC_COLLATE = "AU_AU.UTF-8", LC_ADDRESS = "AU_AU.UTF-8", LC_TELEPHONE = "AU_AU.UTF-8", LC_MESSAGES = "AU_AU.UTF-8", LC_NAME = "AU_AU.UTF-8", LC_MEASUREMAUT = "AU_AU.UTF-8", LC_IDAUTIFICATION = "AU_AU.UTF-8", LC_NUMERIC = "AU_AU.UTF-8", LC_PAPER = "AU_AU.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). (Reading database ... 157848 files and directories currently installed.) Preparing to replace locales 2.13+git20120306-3 (using .../locales_2.13+git20120306-3_all.deb) ... Unpacking replacement locales ... 
Processing triggers for man-db ... Setting up locales (2.13+git20120306-3) ... /bin/bash: warning: setlocale: LC_ALL: cannot change locale ( (unset)) Generating locales... en_AG.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_AU.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_BW.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_CA.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_DK.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_GB.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_HK.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_IE.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_IN.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_NG.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_NZ.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_PH.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_SG.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_US.ISO-8859-1... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_US.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_ZA.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_ZM.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_ZW.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done Generation complete. ganesha@ubuntu:~$ How to correct this?
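
    A hedged sketch of one common fix (my addition, not part of the question): every LC_* variable is pointing at the nonexistent locale “AU_AU.UTF-8”, so generating a valid locale, rewriting the defaults, and removing the stale overrides usually clears the warnings. Treat it as a starting point and adjust the target locale to taste:

        # Make sure the locales you actually want exist
        sudo locale-gen en_US.UTF-8 en_AU.UTF-8
        # Rewrite /etc/default/locale and drop the bogus LC_ALL entry
        sudo update-locale LANG=en_US.UTF-8 LC_ALL=
        # Remove any LANG/LC_* lines that still say AU_AU from ~/.profile and
        # ~/.pam_environment, then log out and back in so every process picks
        # up the new values.
        locale   # should now report en_US.UTF-8 without warnings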

    Read the article

  • It’s nice to be important, but it’s more important to be nice

    - by BuckWoody
    I’ve been a little “preachy” lately, telling you that you should let people finish their sentences, and always check a problem out before you tell a user that their issue is “impossible”. Well, I’ll round that out with one more tip today. Keep in mind that all of these things are actions I’ve been guilty of, hopefully in the past. I’m kind of a “work in progress”. And yes, I know these tips are coming from someone who picks on people in presentations, but that is of course done in fun, and (hopefully) with the audience’s knowledge.   (No, this isn’t aimed at any one person or event in particular – I just see it happen a lot)   I’ve seen, unfortunately over and over, someone in authority react badly to someone who is incorrect, or at least perceived to be incorrect. This might manifest itself in a comment, post, question or whatever, but the point is that I’ve seen really intelligent people literally attack someone they view as getting something wrong. Don’t misunderstand me; if someone posts that you should always drop a production database in the middle of the day I think you should certainly speak up and mention that this might be a bad idea!  No, I’m talking about generalizations or even incorrect statements done in good faith. Let me explain with an example.   Suppose someone makes the statement: “If you don’t have enough space on your system, you can just use a DBCC command to shrink the database”. Let’s take two responses to this statement.   Response One: “That’s insane. Everyone knows that shrinking a database is a stupid idea, you’re just going to fragment your indexes all over the place.” Response Two: “That’s an interesting take – in my experience and from what I’ve read here (someurl.com) I think this might not be a universal best practice.”   Of course, both responses let the person making the statement and those reading it know that you don’t agree, and that it’s probably wrong. But the person you responded to and the general audience hearing you (or reading your response) might form two different opinions of you.   The first response says to me “this person really needs to be right, and takes arguments personally. They aren’t thinking of the other person at all, or the folks reading or hearing the exchange. They turned an incorrect technical statement into a personal attack. They haven’t left the other party any room to ‘save face’, and they have potentially turned what could be a positive learning experience for everyone into a negative. Also, they sound more than just a little arrogant.”   The second response says to me “this person has left room for everyone to save face, has presented evidence to the contrary and is thinking about moving the ball forward and getting it right rather than attacking someone for getting it wrong.” It’s the idea of questioning a statement rather than attacking a person.   Perhaps you have a different take. Maybe you think the “direct” approach is best – and maybe that’s worked for you. Something to consider is what you’ve really accomplished while using that first method. Sure, the info you provide is correct, and perhaps someone out there won’t shrink a database because of your response – but perhaps you’ve turned a lot more people off, and now they won’t listen to your other valuable information. You’ll be an expert, but another one of the nameless, arrogant jerks in technology. And I don’t think anyone likes to be thought of that way.   OK, I’ll get down off of the high-horse now. 
And I’ll keep the title of this entry (said to me by my grandmother when I was a little kid) in mind when I dismount. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters Pick a small project with a small(ish) team.  This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board! Research Research the tool(s) that you want to use.  Some tools provide all of the features you would need while some only provide a slice of the pie.  DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next.  Ideally a tool can track database versions and automatically apply updates.  The change script generation process can be manual, but having diff tools available to automatically generate it can really reduce the overhead to adoption.  Finally, an automated tool to generate a script file per database object is an added bonus as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes. Don’t settle on just one tool, identify several.  Then work with the team to evaluate the tools.  Have the team do some tests of the following scenarios with each tool: Baseline an existing database: can the migration tool work with legacy databases?  Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs. Add/drop tables Add/drop procedures/functions/views Alter tables (rename columns, add columns, remove columns) Massage data – migrations sometimes involve changing data types that cannot be implicitly casted and require you to decide how the data is explicitly cast to the new type.  This is a requirement for a migrations platform.  Think about a case where you might want to combine fields, or move a field from one table to another, you wouldn’t want to lose the data. Run the tool via the command line.  If you cannot automate the tool in Continuous Integration what is the point? Create a copy of a database on demand. Backup/restore databases locally. Let the team give feedback and decide together, what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL based migration platforms.  In general I would recommend staying away from the fluent platforms as they often lack baseline capabilities and add overhead to learn a new API when SQL is already a very well known DSL.  Code migrations often get messy with procedures/views/functions as these have to be created with SQL and aren’t cross platform anyways.  IMO stick to SQL based migrations. Reconciling Production If your project is a legacy application, you will need to reconcile the current state of production with your development databases.  Find changes in production and bring them down to development, even if they are old and need to be removed.  Once complete, produce a baseline of either dev or prod as they are now in sync.  Commit this to your VCS of choice. Add whatever schema changes tracking mechanism your tool requires to your development database.  This often requires adding a table to track the schema version of that database.  Your tool should support doing this for you.  You can add this table to production when you do your next release. Script out any changes currently in dev.  Remove production artifacts that you brought down during reconciliation.  Add change scripts for any outstanding changes in dev since the last production release.  Commit these to your repository.   
Say No to Shared Dev DBs Simply put, you wouldn’t dream of sharing a code checkout, why would you share a development database?  If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server once all projects are using DB VCS).  Doing DB VCS with a shared database is bound to cause problems as people won’t be able to easily script out their own changes from those that others are working on.   First prod release Copy prod to your beta/testing environment.  Add the schema changes table (or mechanism) and do a test run of your changes.  If successful you can schedule this to be run on production.   Evaluation After your first release, evaluate the pain points of the process.  Try to find tools or modifications to existing tools to help fix them.  Don’t leave stones unturned, iteratively evolve your tools and practices to make the process as seamless as possible.  This is why I suggest open source alternatives.  Nothing is set in stone, a good example was adding transactional support to TSqlMigrations.  We ran into situations where an update would break a database, so I added a feature to do transactional updates and rollback on errors!  Another good example is generating change scripts.  We have been manually making these for months now.  I found an open source project called Open DB Diff and integrated this with TSqlMigrations.  These were things we just accepted at the time when we began adopting our tool set.  Once we became comfortable with the base functionality, it was time to start automating more of the process.  Just like anything else with development, never be afraid to try to find tools to make your job easier!   Enjoy -Wes
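
    To make the “schema changes tracking mechanism” described above concrete, here is a hedged T-SQL sketch of the kind of version table and guarded change script most migration tools generate for you. The table, column, and script names are invented for illustration and are not taken from TSqlMigrations or RoundHouse:

        -- Hypothetical version-tracking table; real tools create their own variant.
        IF OBJECT_ID('dbo.SchemaVersion', 'U') IS NULL
            CREATE TABLE dbo.SchemaVersion
            (
                VersionNumber INT          NOT NULL PRIMARY KEY,
                ScriptName    VARCHAR(255) NOT NULL,
                AppliedOn     DATETIME     NOT NULL DEFAULT GETDATE()
            );
        GO
        -- Change script 2: runs only if it has not been applied to this database yet.
        IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE VersionNumber = 2)
        BEGIN
            ALTER TABLE dbo.Customer ADD MiddleName VARCHAR(50) NULL;  -- example change
            INSERT INTO dbo.SchemaVersion (VersionNumber, ScriptName)
            VALUES (2, '002_add_customer_middlename.sql');
        END
        GO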

    Read the article

  • Programmatically use a server as the Build Server for multiple Project Collections

    Important: With this post you create an unsupported scenario by Microsoft. It will break your support for this server with Microsoft. So handle with care. I am the administrator an a TFS environment with a lot of Project Collections. In the supported configuration of Microsoft 2010 you need one Build Controller per Project Collection, and it is not supported to have multiple Build Controllers installed. Jim Lamb created a post how you can modify your system to change this behaviour. But since I have so many Project Collections, I automated this with the API of TFS. When you install a new build server via the UI, you do the following steps Register the build service (with this you hook the windows server into the build server environment) Add a new build controller Add a new build agent So in pseudo code, the code would look like foreach (projectCollection in GetAllProjectCollections) {       CreateNewWindowsService();       RegisterService();       AddNewController();       AddNewAgent(); } The following code fragements show you the most important parts of the method implementations. Attached is the full project. CreateNewWindowsService We create a new windows service with the SC command via the Diagnostics.Process class:             var pi = new ProcessStartInfo("sc.exe")                         {                             Arguments =                                 string.Format(                                     "create \"{0}\" start= auto binpath= \"C:\\Program Files\\Microsoft Team Foundation Server 2010\\Tools\\TfsBuildServiceHost.exe              /NamedInstance:{0}\" DisplayName= \"Visual Studio Team Foundation Build Service Host ({1})\"",                                     serviceHostName, tpcName)                         };            Process.Start(pi);             pi.Arguments = string.Format("failure {0} reset= 86400 actions= restart/60000", serviceHostName);            Process.Start(pi); RegisterService The trick in this method is that we set the NamedInstance static property. This property is Internal, so we need to set it through reflection. To get information on these you need nice Microsoft friends and the .Net reflector .             // Indicate which build service host instance we are using            typeof(BuildServiceHostUtilities).Assembly.GetType("Microsoft.TeamFoundation.Build.Config.BuildServiceHostProcess").InvokeMember("NamedInstance",              System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.SetProperty | System.Reflection.BindingFlags.Static, null, null, new object[] { serviceName });             // Create the build service host            serviceHost = buildServer.CreateBuildServiceHost(serviceName, endPoint);            serviceHost.Save();             // Register the build service host            BuildServiceHostUtilities.Register(serviceHost, user, password); AddNewController and AddNewAgent Once you have the BuildServerHost, the rest is pretty straightforward. There are methods on the BuildServerHost to modify the controllers and the agents                 controller = serviceHost.CreateBuildController(controllerName);                 agent = controller.ServiceHost.CreateBuildAgent(agentName, buildDirectory, controller);                controller.AddBuildAgent(agent); You have now seen the highlights of the application. If you need it and want to have sample information when you work in this area, download the app TFS2010_RegisterBuildServerToTPCs

    Read the article

  • Play Your Favorite DOS Games in XP, Vista, and Windows 7

    - by Matthew Guay
    Want to take a trip down memory lane with old school DOS games?  D-Fend Reloaded makes it easy for you to play your favorite DOS games directly on XP, Vista, and Windows 7. D-Fend Reloaded is a great frontend for DOSBox, the popular DOS emulator.  It lets you install and run many DOS games and applications directly from its interface without ever touching a DOS prompt.  It works great on XP, Vista, and Windows 7 32 & 64-bit versions.   Getting Started Download D-Fend Reloaded (link below), and install with the default settings.  You don’t need to install DOSBox, as D-Fend Reloaded will automatically install all the components you need to run DOS games on Windows. D-Fend Reloaded can also be installed as a portable application, so you can run it from a flash drive on any Windows computer by selecting User defined installation. Then select Portable mode installation. Once D-Fend Reloaded is installed, you can go ahead and open the program. Then simply click “Accept all settings” to apply the default settings.   D-Fend is now ready to run all of your favorite DOS games. Installing DOS Games and Applications: To install a DOS game or application, simply drag-and-drop a zip file of the app into D-Fend Reloaded’s window.  D-Fend Reloaded will automatically extract the program… Then will ask you to name the application and choose where to store it — by default it uses the name of the DOS app. Now you’ll see a new entry for the app you just installed.  Simply double-click to run it.   D-Fend will remind you that you can switch out of fullscreen mode by pressing Alt+Enter, and can also close the DOS application by pressing Ctrl+F9.  Press Ok to run the program. Here we’re running Ms. PacPC, a remake of the classic game Ms. Pac-Man, in full-screen mode.  All features work automatically, including sound, and you never have to setup anything from DOS command line — it just works. Here it’s in windowed mode running on Windows 7. Please note that your color scheme may change to Windows Basic while running DOS applications. You can run DOS application just as easily.  Here’s Word 5.5 running in in DOSBox through D-Fend Reloaded… Game Packs: Want to quickly install many old DOS freeware and trial games?  D-Fend Reloaded offers several game packs that let you install dozens of DOS games with only four clicks…just download and run the game pack installer of your choice (link below). Now you’ve got a selection of DOS games to choose from. Here’s a group of poor lemmings walking around … in Windows 7. Conclusion D-Fend Reloaded gives you a great way to run your favorite DOS games and applications directly from XP, Vista, and Windows 7.  Give it a try, and relive your DOS days from the comfort of your Windows desktop. What were some of your favorite DOS games and applications? Leave a comment and let us know. Links Download D-Fend Reloaded Download DOS game packs for D-Fend Reloaded Download Ms. 
Pac-PC Similar Articles Productive Geek Tips Friday Fun: Get Your Mario OnFriday Fun: Go Retro with PacmanThursday’s Pre-Holiday Lazy Links RoundupFriday Fun: Five More Time Wasting Online GamesFriday Fun: Holiday Themed Games TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 PCmover Professional The Growth of Citibank Quickly Switch between Tabs in IE Windows Media Player 12: Tweak Video & Sound with Playback Enhancements Own a cell phone, or does a cell phone own you? Make your Joomla & Drupal Sites Mobile with OSMOBI Integrate Twitter and Delicious and Make Life Easier

    Read the article

  • How to Add a Business Card, or vCard (.vcf) File, to a Signature in Outlook 2013 Without Displaying an Image

    - by Lori Kaufman
    Whenever you add a Business Card to your signature in Outlook 2013, the Signature Editor automatically generates a picture of it and includes that in the signature as well as attaching the .vcf file. However, there is a way to leave out the image. To remove the business card image from your signature but maintain the attached .vcf file, you must make a change to the registry. NOTE: Before making changes to the registry, be sure you back it up. We also recommend creating a restore point you can use to restore your system if something goes wrong. Before changing the registry, we must add the Business Card to the signature and save it so a .vcf file of the contact is created in the Signatures folder. To do this, click the File tab. Click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. On the Mail screen, click Signatures in the Compose messages section. For this example, we will create a new signature to include the .vcf file for your business card without the image. Click New below the Select signature to edit box. Enter a name for the new signature, such as Business Card, and click OK. Enter text in the signature editor and format it the way you want or insert a different image or logo. Click Business Card above the signature editor. Select the contact you want to include in the signature on the Insert Business Card dialog box and click OK. Click Save below the Select signature to edit box. This creates a .vcf file for the business card in the Signatures folder. Click on the business card image in the signature and delete it. You should only see your formatted text or other image or logo in the signature editor. Click OK to save your new signature and close the signature editor. Close Outlook as well. Now, we will open the Registry Editor to add a key and value to indicate where to find the .vcf to include in the signature we just created. If you’re running Windows 8, press the Windows Key + X to open the command menu and select Run. You can also press the Windows Key + R to directly access the Run dialog box. NOTE: In Windows 7, select Run from the Start menu. In the Open edit box on the Run dialog box, enter “regedit” (without the quotes) and click OK. If the User Account Control dialog box displays, click Yes to continue. NOTE: You may not see this dialog box, depending on your User Account Control settings. Navigate to the following registry key: HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Outlook\Signatures Make sure the Signatures key is selected. Select New | String Value from the Edit menu. NOTE: You can also right-click in the empty space in the right pane and select New | String Value from the popup menu. Rename the new value to the name of the Signature you created. For this example, we named the value Business Card. Double-click on the new value. In the Value data edit box on the Edit String dialog box, enter the value indicating the location of the .vcf file to include in the signature. The format is: <signature name>_files\<name of .vcf file> For our example, the Value data should be as follows: Business Card_files\Lori Kaufman The name of the .vcf file is generally the contact name. If you’re not sure of what to enter for the Value data for the new key value, you can check the location and name of the .vcf file. To do this, open the Outlook Options dialog box and access the Mail screen as instructed earlier in this article. 
However, press and hold the Ctrl key while clicking the Signatures button. The Signatures folder opens in Windows Explorer. There should be a folder in the Signatures folder named after the signature you created with “_files” added to the end. For our example, the folder is named Business Card_files. Open this folder. In this folder, you should see a .vcf file with the name of your contact as the name of the file. For our contact, the file is named Lori Kaufman.vcf. The path to the .vcf file should be the name of the folder for the signature (Business Card_files), followed by a “\”, and the name of the .vcf file without the extension (Lori Kaufman). Putting these names together, you get the path that should be entered as the Value data in the new key you created in the Registry Editor. Business Card_files\Lori Kaufman Once you’ve entered the Value data for the new key, select Exit from the File menu to close the Registry Editor. Open Outlook and click New Email on the Home tab. Click Signature in the Include section of the New Mail Message tab and select your new signature from the drop-down menu. NOTE: If you made the new signature the default signature, it will be automatically inserted into the new mail message. The .vcf file is attached to the email message, but the business card image is not included. All you will see in the body of the email message is the text or other image you included in the signature. You can also choose to include an image of your business card in a signature with no .vcf file attached.     
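
    If you prefer the command line, a hedged equivalent of the Registry Editor steps above is a single reg add command run as the signed-in user; the signature and contact names below are the article’s examples, so substitute your own:

        reg add "HKCU\Software\Microsoft\Office\15.0\Outlook\Signatures" ^
            /v "Business Card" /t REG_SZ /d "Business Card_files\Lori Kaufman" /f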

    Read the article

  • Information Indepth Newsletter - Linux Edition

    - by Paulo Folgado
    INFORMATION INDEPTH NEWSLETTERLinux Edition February 2011 Stay Connected:  NEWS Now Available: Oracle Linux 6 Get the latest release of Oracle Linux 6, which includes Unbreakable Enterprise Kernel.Download Oracle Linux 6 Read More Customers Succeed by Using Oracle Exadata with Oracle Linux Watch IT executives from Bank of America, Linkshare, and Johns Hopkins as they talk about the business challenges they faced and why they chose to use Oracle Linux along with Oracle Exadata as the solution. Watch Now Video Interview: Oracle Senior Vice President Wim Coekaerts Watch Wim Coekaerts, senior vice president, Linux and Virtualization Engineering, as he talks about use cases for Oracle VM Templates as well as the Unbreakable Enterprise Kernel for Linux.Watch Now Hot Off the Press: Migrate Your IBM AIX Environment to Oracle Linux This new white paper provides recommendations for planning and implementing the migration of applications from an IBM Power System running AIX to Oracle's Sun Fire X4800 Server with Intel Xeon 7560 Processor running Oracle Linux 5.5.Read More  Back to Top BLOGOSPHERE Just Launched: The Oracle Linux Blog Follow our new Oracle Linux blog  to hear the latest updates, product news, upcoming events, and all the latest happenings, directly from the Linux team at Oracle. Back to Top TECH DIVE NEW: Linux/Oracle Solaris CommandComparo Site from Oracle Technology NetworkThis site gives equivalent command syntax in Oracle Solaris 10 and Oracle Enterprise Linux 5 for common administrative tasks--focusing particularly on tasks that have tricky syntax or that you frequently need to double check. It acts as a quick reference for administrators who operate in these two OS environments. Free Download: Oracle Linux Release 5.6Did you know that by using Oracle Linux 5.5 or 5.6 along with the Unbreakable Enterprise Kernel, you can get all the benefits of Linux mainline kernel 2.6.32 and more, right now, without the need to reinstall or migrate to a new operating system such as RHEL6?Read Release NotesDownload Oracle Linux 5.6 LSB 4.0 Certification Completed for Oracle Linux 5.5Oracle Linux 5.5 with Unbreakable Enterprise Kernel successfully completed the LSB 4.0 certification.  Back to Top WEBCASTS Boost Your Linux Performance with Oracle's Enhancements in Infiniband and RDSRegister to hear Director of Kernel Engineering Chris Mason cover scalability and performance improvements in Linux environment. Get the Facts Oracle's Unbreakable Enterprise KernelSVP Wim Coekaerts and Senior Director Monica Kumar cover the facts about and benefits of using Unbreakable Enterprise Kernel.  View Other Webcasts on Demand   Back to Top EVENTS Collaborate 2011April 10-14 Orlando, Florida Cloud Summit Events, WorldwideVarious dates (check the city for date/time of event) Datacenter Efficiency Events WorldwideThese events include Linux and Oracle VM sessions.Various dates (check the city for date/time of event) Virtualization Events in North America Find an Oracle Event  Back to Top EDUCATION Get Oracle Linux Certified from Oracle University Oracle University offers courses in both Oracle Linux and the administration of Oracle Database on Linux.  Back to Top CUSTOMER SPOTLIGHT Pella Corporation Improves IT Performance and Efficiency with Oracle Linux and Oracle VM To improve IT performance and efficiency and lower operational costs, Pella Corporation, has standardized on Oracle VM and Oracle Linux. 
Read More Disney Store Deploys POS in 330 Stores and 7 Countries on Oracle Linux Disney Store is running 1,500 registers worldwide on a broad Oracle technology software stack including Oracle Database 11g, Oracle Fusion Middleware, and Oracle Linux. Read More Back to Top PARTNER SPOTLIGHT Emulex and Oracle Announce Data Integrity Features The Unbreakable Enterprise Kernel provides data integrity checking between Oracle Database applications and Emulex 8Gb/s LightPulse Fibre Channel Host Bus Adapters. Read More Dell Inc. Dell Inc. tested and validated configurations support Oracle Linux. Back to Top STAY IN TOUCH Follow @ORCL_Linux on Twitter for the latest penguin tweets Bookmark Oracle.com/Linux Read the Oracle Linux blog Back to Top  Oracle Information InDepth newsletters bring targeted news, articles, customer stories, and special offers to business people who want to find out how to streamline enterprise information management, measure results, improve business processes, and communicate a single truth to their constituents. Please send questions or comments to [email protected]. For answers to questions about subscribing, unsubscribing, and managing your Oracle e-mail communications preferences, please see the Oracle E-Mail Communications page. Copyright © 2011, Oracle Corporation and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor is it subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. 

    Read the article

  • How to build Open JavaFX for Android.

    - by PictureCo
    Here's a short recipe for baking JavaFX for Android dalvik. We will need just a few ingredients but each one requires special care. So let's get down to the business.  SourcesThe first ingredient is an open JavaFX repository. This should be piece of cake. As always there's a catch. You probably know that dalvik is jdk6 compatible  and also that certain APIs are missing comparing to good old java vm from Oracle.  Fortunately there is a repository which is a backport of regular OpenJFX to jdk7 and going from jdk7 to jdk6 is possible. The first thing to do is to clone or download the repository from https://bitbucket.org/narya/jfx78. Main page of the project says "It works in some cases" so we will presume that it will work in most cases As I've said dalvik vm misses some APIs which would lead to a build failures. To get them use another compatibility repository which is available on GitHub https://github.com/robovm/robovm-jfx78-compat. Download the zip and unzip sources into jfx78/modules/base.We need also a javafx binary stubs. Use jfxrt.jar from jdk8.The last thing to download are freetype sources from http://freetype.org. These will be necessary for native font rendering. Toolchain setup I have to point out that these instructions were tested only on linux. I suppose they will work with minimal changes also on Mac OS. I also presume that you were able to build open JavaFX. That means all tools like ant, gradle, gcc and jdk8 have been installed and are working all right. In addition to this you will need to download and install jdk7, Android SDK and Android NDK for native code compilation.  Installing all of them will take some time. Don't forget to put them in your path. export ANDROID_SDK=/opt/android-sdk-linux export ANDROID_NDK=/opt/android-ndk-r9b export JAVA_HOME=/opt/jdk1.7.0 export PATH=$JAVA_HOME/bin:$ANDROID_SDK/tools:$ANDROID_SDK/platform-tools:$ANDROID_NDK FreetypeUnzip freetype release sources first. We will have to cross compile them for arm. Firstly we will create a standalone toolchain for cross compiling installed in ~/work/ndk-standalone-19. $ANDROID_NDK/build/tools/make-standalone-toolchain.sh  --platform=android-19 --install-dir=~/work/ndk-standalone-19 After the standalone toolchain has been created cross compile freetype with following script: export TOOLCHAIN=~/work/freetype/ndk-standalone-19 export PATH=$TOOLCHAIN/bin:$PATH export FREETYPE=`pwd` ./configure --host=arm-linux-androideabi --prefix=$FREETYPE/install --without-png --without-zlib --enable-shared sed -i 's/\-version\-info \$(version_info)/-avoid-version/' builds/unix/unix-cc.mk make make install It will compile and install freetype library into $FREETYPE/install. We will link to this install dir later on. It would be possible also to link openjfx font support dynamically against skia library available on Android which already contains freetype. It creates smaller result but can have compatibility problems. Patching Download patches javafx-android-compat.patch + android-tools.patch and patch jfx78 repository. I recommend to have look at patches. First one android-compat.patch updates openjfx build script, removes dependency on SharedSecret classes and updates LensLogger to remove dependency on jdk specific PlatformLogger. Second one android-tools.patch creates helper script in android-tools. The script helps to setup javaFX Android projects. Building Now is time to try the build. 
Run following script: JAVA_HOME=/opt/jdk1.7.0 JDK_HOME=/opt/jdk1.7.0 ANDROID_SDK=/opt/android-sdk-linux ANDROID_NDK=/opt/android-ndk-r9b PATH=$JAVA_HOME/bin:$ANDROID_SDK/tools:$ANDROID_SDK/platform-tools:$ANDROID_NDK:$PATH gradle -PDEBUG -PDALVIK_VM=true -PBINARY_STUB=~/work/binary_stub/linux/rt/lib/ext/jfxrt.jar \ -PFREETYPE_DIR=~/work/freetype/install -PCOMPILE_TARGETS=android If everything went all right the output is in build/android-sdk Create first JavaFX Android project Use gradle script int android-tools. The script sets the project structure for you.   Following command creates Android HelloWorld project which links to a freshly built javafx runtime and to a HelloWorld application. NAME is a name of Android project. DIR where to create our first project. PACKAGE is package name required by Android. It has nothing to do with a packaging of javafx application. JFX_SDK points to our recently built runtime. JFX_APP points to dist directory of javafx application. (where all application jars sit) JFX_MAIN is fully qualified name of a main class. gradle -PDEBUG -PDIR=/home/user/work -PNAME=HelloWorld -PPACKAGE=com.helloworld \ -PJFX_SDK=/home/user/work/jfx78/build/android-sdk -PJFX_APP=/home/user/NetBeansProjects/HelloWorld/dist \ -PJFX_MAIN=com.helloworld.HelloWorld createProject Now cd to the created project and use it like any other android project. ant clean, debug, uninstall, installd will work. I haven't tried it from any IDE Eclipse nor Netbeans. Special thanks to Stefan Fuchs and Daniel Zwolenski for the repositories used in this blog post.
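
    To round the recipe off, here is a hedged sketch of what building and deploying the generated HelloWorld project typically looks like; the APK path is the usual output location for ant-based Android builds and may differ on your setup:

        cd /home/user/work/HelloWorld             # the project created by createProject
        ant clean debug                           # produces a debug APK under bin/
        adb install -r bin/HelloWorld-debug.apk   # -r reinstalls if it is already on the device
        adb logcat | grep -i javafx               # optional: watch the runtime start up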

    Read the article

  • Clean up after Visual Studio

    - by psheriff
    As programmer’s we know that if we create a temporary file during the running of our application we need to make sure it is removed when the application or process is complete. We do this, but why can’t Microsoft do it? Visual Studio leaves tons of temporary files all over your hard drive. This is why, over time, your computer loses hard disk space. This blog post will show you some of the most common places where these files are left and which ones you can safely delete..NET Left OversVisual Studio is a great development environment for creating applications quickly. However, it will leave a lot of miscellaneous files all over your hard drive. There are a few locations on your hard drive that you should be checking to see if there are left-over folders or files that you can delete. I have attempted to gather as much data as I can about the various versions of .NET and operating systems. Of course, your mileage may vary on the folders and files I list here. In fact, this problem is so prevalent that PDSA has created a Computer Cleaner specifically for the Visual Studio developer.  Instructions for downloading our PDSA Developer Utilities (of which Computer Cleaner is one) are at the end of this blog entry.Each version of Visual Studio will create “temporary” files in different folders. The problem is that the files created are not always “temporary”. Most of the time these files do not get cleaned up like they should. Let’s look at some of the folders that you should periodically review and delete files within these folders.Temporary ASP.NET FilesAs you create and run ASP.NET applications from Visual Studio temporary files are placed into the <sysdrive>:\Windows\Microsoft.NET\Framework[64]\<vernum>\Temporary ASP.NET Files folder. The folders and files under this folder can be removed with no harm to your development computer. Do not remove the "Temporary ASP.NET Files" folder itself, just the folders underneath this folder. If you use IIS for ASP.NET development, you may need to run the iisreset.exe utility from the command prompt prior to deleting any files/folder under this folder. IIS will sometimes keep files in use in this folder and iisreset will release the locks so the files/folders can be deleted.Website CacheThis folder is similar to the ASP.NET Temporary Files folder in that it contains files from ASP.NET applications run from Visual Studio. This folder is located in each users local settings folder. The location will be a little different on each operating system. For example on Windows Vista/Windows 7, the folder is located at <sysdrive>:\Users\<UserName>\AppData\Local\Microsoft\WebsiteCache. If you are running Windows XP this folder is located at <sysdrive>:\ Documents and Settings\<UserName>\Local Settings\Application Data\Microsoft\WebsiteCache. Check these locations periodically and delete all files and folders under this directory.Visual Studio BackupThis backup folder is used by Visual Studio to store temporary files while you develop in Visual Studio. This folder never gets cleaned out, so you should periodically delete all files and folders under this directory. On Windows XP, this folder is located at <sysdrive>:\Documents and Settings\<UserName>\My Documents\Visual Studio 200[5|8]\Backup Files. On Windows Vista/Windows 7 this folder is located at <sysdrive>:\Users\<UserName>\Documents\Visual Studio 200[5|8]\.Assembly CacheNo, this is not the global assembly cache (GAC). 
It appears that this cache is only created when doing WPF or Silverlight development with Visual Studio 2008 or Visual Studio 2010. This folder is located in <sysdrive>:\ Users\<UserName>\AppData\Local\assembly\dl3 on Windows Vista/Windows 7. On Windows XP this folder is located at <sysdrive>:\ Documents and Settings\<UserName>\Local Settings\Application Data\assembly. If you have not done any WPF or Silverlight development, you may not find this particular folder on your machine.Project AssembliesThis is yet another folder where Visual Studio stores temporary files. You will find a folder for each project you have opened and worked on. This folder is located at <sysdrive>:\Documents and Settings\<UserName>Local Settings\Application Data\Microsoft\Visual Studio\[8|9].0\ProjectAssemblies on Windows XP. On Microsoft Vista/Windows 7 you will find this folder at <sysdrive>:\Users\<UserName>\AppData\Local\Microsoft\Visual Studio\[8|9].0\ProjectAssemblies.Remember not all of these folders will appear on your particular machine. Which ones do show up will depend on what version of Visual Studio you are using, whether or not you are doing desktop or web development, and the operating system you are using.SummaryTaking the time to periodically clean up after Visual Studio will aid in keeping your computer running quickly and increase the space on your hard drive. Another place to make sure you are cleaning up is your TEMP folder. Check your OS settings for the location of your particular TEMP folder and be sure to delete any files in here that are not in use. I routinely clean up the files and folders described in this blog post and I find that I actually eliminate errors in Visual Studio and I increase my hard disk space.NEW! PDSA has just published a “pre-release” of our PDSA Developer Utilities at http://www.pdsa.com/DeveloperUtilities that contains a Computer Cleaner utility which will clean up the above-mentioned folders, as well as a lot of other miscellaneous folders that get Visual Studio build-up. You can download a free trial at http://www.pdsa.com/DeveloperUtilities. If you wish to purchase our utilities through the month of November, 2011 you can use the RSVP code: DUNOV11 to get them for only $39. This is $40 off the regular price.NOTE: You can download this article and many samples like the one shown in this blog entry at my website. http://www.pdsa.com/downloads. Select “Tips and Tricks”, then “Developer Machine Clean Up” from the drop down list.Good Luck with your Coding,Paul Sheriff** SPECIAL OFFER FOR MY BLOG READERS **We frequently offer a FREE gift for readers of my blog. Visit http://www.pdsa.com/Event/Blog for your FREE gift!
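
    As a hedged illustration only (the paths and version numbers below are examples based on the folders described above; verify every path for your Visual Studio and .NET versions before running, because the deletions are permanent), the clean-up can be scripted from a Windows command prompt:

        @echo off
        rem Example clean-up script -- check each path before running it.
        iisreset /stop
        for /d %%D in ("%windir%\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\*") do rd /s /q "%%D"
        for /d %%D in ("%windir%\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\*") do rd /s /q "%%D"
        rd /s /q "%LOCALAPPDATA%\Microsoft\WebsiteCache"
        rd /s /q "%USERPROFILE%\Documents\Visual Studio 2008\Backup Files"
        rd /s /q "%LOCALAPPDATA%\Microsoft\Visual Studio\9.0\ProjectAssemblies"
        iisreset /start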

    Read the article

  • repair broken packages - "dpkg: error: conflicting actions -f (--field) and -r (--remove)"

    - by yinon
    Ubuntu 12.04 LTS. if more information will be needed, tell me and'll give. the main problem is: tzach@tzach-pc:~$ sudo apt-get install docky [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done docky is already the newest version. You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not going to be installed or java6-runtime-headless openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). tzach@tzach-pc:~$ and also: tzach@tzach-pc:~$ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. **The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not installed or java6-runtime-headless openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not installed E: Unmet dependencies. Try using ******* so we tryied the guide here in messege #9: http://ubuntuforums.org/showthread.php?t=947124 we run all the first 4 commands and the last one-"sudo apt-get autoremove" gave us: tzach@tzach-pc:~$ sudo apt-get autoremove Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: **ca-certificates-java** : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not installed or java6-runtime-headless **openjdk-7-jre-lib** : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not installed E: Unmet dependencies. Try using -f. so we run the last command twice: sudo dpkg --remove -force --force-remove-reinstreq ca-certificates-java and sudo dpkg --remove -force --force-remove-reinstreq openjdk-7-jre-lib but both of them gives: tzach@tzach-pc:~$ sudo dpkg --remove -force --force-remove-reinstreq ca-certificates-java [sudo] password for tzach: dpkg: error: conflicting actions -f (--field) and -r (--remove) Type dpkg --help for help about installing and deinstalling packages [*]; Use `dselect' or `aptitude' for user-friendly package management; Type dpkg -Dhelp for a list of dpkg debug flag values; Type dpkg --force-help for a list of forcing options; Type dpkg-deb --help for help about manipulating *.deb files; Options marked [*] produce a lot of output - pipe it through `less' or `more' ! EDIT FOR green7-output of "sudo apt-get -f install": tzach@tzach-pc:~$ sudo apt-get -f install [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: icedtea-7-jre-cacao icedtea-7-jre-jamvm java-common openjdk-7-jre-headless tzdata-java Suggested packages: default-jre equivs sun-java6-fonts ttf-dejavu-extra fonts-ipafont-gothic fonts-ipafont-mincho ttf-telugu-fonts ttf-oriya-fonts ttf-kannada-fonts ttf-bengali-fonts The following packages will be REMOVED: ttf-mscorefonts-installer The following NEW packages will be installed: icedtea-7-jre-cacao icedtea-7-jre-jamvm java-common openjdk-7-jre-headless tzdata-java 0 upgraded, 5 newly installed, 1 to remove and 355 not upgraded. 
5 not fully installed or removed. Need to get 0 B/29.6 MB of archives. After this operation, 88.5 MB of additional disk space will be used. Do you want to continue [Y/n]? y debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: warning: there's no installed package matching ttf-mscorefonts-installer:amd64 Setting up tzdata (2012e-0ubuntu0.12.04) ... debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing tzdata (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: tzdata E: Sub-process /usr/bin/dpkg returned an error code (1) EDIT2 FOR green7: tzach@tzach-pc:~$ sudo apt-get remove --purge tzdata [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not going to be installed or java6-runtime-headless libc6 : Depends: tzdata but it is not going to be installed libc6:i386 : Depends: tzdata:i386 libical0 : Depends: tzdata but it is not going to be installed openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not going to be installed python-dateutil : Depends: tzdata but it is not going to be installed ubuntu-minimal : Depends: tzdata but it is not going to be installed util-linux : Depends: tzdata (>= 2006c-2) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). EDIT3 FOR green7: tzach@tzach-pc:~$ sudo apt-get install openjdk-7-jre-headless [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: openjdk-7-jre-headless : Depends: tzdata-java but it is not going to be installed Depends: java-common (>= 0.28) but it is not going to be installed Recommends: icedtea-7-jre-cacao (= 7~u3-2.1.1~pre1-1ubuntu3) but it is not going to be installed Recommends: icedtea-7-jre-jamvm (= 7~u3-2.1.1~pre1-1ubuntu3) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). some things in the text also supposed to be bolded. but not critic (: Thanks for the editing! Thanks a lot for your assistance.
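
    For what it is worth, the error message at the end points at the real problem: dpkg reads the stray “-force” as the short option -f (--field), which conflicts with -r (--remove). A hedged sketch of the corrected commands and the usual follow-up:

        # Keep only the long option; there is no separate "-force" flag.
        sudo dpkg --remove --force-remove-reinstreq ca-certificates-java
        sudo dpkg --remove --force-remove-reinstreq openjdk-7-jre-lib

        # If /var/cache/debconf/config.dat is reported as locked, close Software
        # Center / Update Manager first, then let dpkg and apt finish the job:
        sudo dpkg --configure -a
        sudo apt-get -f install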

    Read the article

  • Formal Languages, Inductive Proofs & Regular Expressions

    - by MarkPearl
    So I am slogging away at my UNISA stuff. I have just finished an initial, non-stop read-through of the first 11 chapters of my COS 201 textbook, “Introduction to Computer Theory, 2nd Edition” by Daniel Cohen. It has been an interesting couple of days, with familiar concepts coming up as well as some new territory. In this posting I am going to cover the first couple of chapters of the book. Let’s start with Formal Languages… What exactly is a formal language? Pretty much a no-duh question for me but still a good one to ask – a formal language is a language that is defined in a precise mathematical way. Does that mean that the English language is a formal language? I would say no – and my main motivation for this is that one can have an English sentence that is grammatically correct yet still ambiguous. For example, the ambiguous sentence: "I once shot an elephant in my pyjamas.” For this and possibly many other reasons that I am unaware of, English is termed a “Natural Language”. So why the importance of formal languages in computer science? Again, a no-duh question in my mind… If we want computers to be effective and useful tools, then we need them to be able to evaluate a series of commands in some form of language that, when interpreted by the device, leaves no confusion as to what we were requesting. Imagine the mayhem that would exist if a computer misinterpreted a command to print a document and instead decided to delete it. So what is a formal language made up of? For my study purposes, a language is made up of a finite alphabet. For a formal language to exist there needs to be a specification of the language that describes whether a string of characters has membership in the language or not. There are two basic ways to do this: by a “machine” that will recognize strings of the language (e.g. finite automata), or by a rule that describes how strings of a language can be formed (e.g. regular expressions). When we use the phrase “string of characters”, we can also be referring to a “word”. What is an inductive proof? So I am not too far into my textbook and of course it starts referring to proofs and their different types. I have had to work through several different approaches to proofs in the past, but I can never remember their formal names, so when I saw “inductive proof” I thought to myself – what the heck is that? Google to the rescue… An inductive proof is like a normal proof, but it employs a neat trick which allows you to prove a statement about an arbitrary number n by first proving it is true when n is 1 and then assuming it is true for n=k and showing it is true for n=k+1. The idea is that if you want to show that someone can climb to the nth floor of a fire escape, you need only show that you can climb the ladder up to the fire escape (n=1) and then show that you know how to climb the stairs from any level of the fire escape (n=k) to the next level (n=k+1). Does this sound like a form of recursion? No surprise then that in the same chapter they deal with recursive definitions. An example of a recursive definition for the language EVEN would be the 3 rules below: 2 is in EVEN; if x is in EVEN then so is x+2; the only elements in the set EVEN are those that can be produced by the rules above. Nothing too exciting… So if a definition for a language is given recursively, then it makes sense that the language can be proved using induction.
Regular Expressions So I am wondering to myself what use all of this is – in fact, I find the biggest challenge with any university material is that it is quite hard to find the immediate practical applications of some theory in real life stuff. How great was my joy when I suddenly saw the term regular expression being introduced. I had been introduced to regular expressions on Stack Overflow where I was trying to recognize whether some text measurement put in by a user was in a valid form or not. For instance, the imperial system of measurement, where you have feet and inches, can be represented in so many different ways. I had eventually turned to regular expressions as an easy way to check whether my parser could correctly parse the text or not and convert it to a normalized measurement. So some rules about languages and regular expressions… Any finite language can be represented by at least one, if not more, regular expressions. A regular expression is basically a rule syntax for expressing how the words of a regular language can be formed. Regular expressions are cool. For a regular expression to be valid for a language it must be able to generate all the words in the language and no other words. This is important. It doesn’t help me if my regular expression parses 100% of my measurement texts but also lets one or two invalid texts pass as well. Okay, so this posting jumps around a bit – but it introduces some very basic fundamentals for the subject which will be built on in later postings… Time to go and do some practical examples now…
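To make that measurement example concrete, here is a minimal sketch in Python with a deliberately simplified pattern of my own (not the expression from the original Stack Overflow question), which accepts a few common ways of writing feet and inches and normalizes them to inches:

import re

# Accepts forms like: 5' 10", 5ft 10in, 6 feet, 71 inches (a deliberately small subset).
PATTERN = re.compile(
    r"^\s*(?:(?P<feet>\d+)\s*(?:'|ft|feet))?"
    r"\s*(?:(?P<inches>\d+)\s*(?:\"|in|inches))?\s*$",
    re.IGNORECASE,
)

def to_inches(text):
    """Return the measurement in inches, or None if the text is not in the language."""
    match = PATTERN.match(text)
    if not match or not (match.group("feet") or match.group("inches")):
        return None
    feet = int(match.group("feet") or 0)
    inches = int(match.group("inches") or 0)
    return feet * 12 + inches

print(to_inches("5' 10\""))   # 70
print(to_inches("6 feet"))    # 72
print(to_inches("banana"))    # None

Note how the last check in to_inches enforces the "no other words" rule from the list above: an empty or unmatched string is rejected rather than silently treated as zero inches.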

    Read the article

  • Oracle Database 12c: Oracle Multitenant Option

    - by hamsun
    1. Why? 2. What is it? 3. How? 1. Why? The main idea of the 'grid' is to share resources, to make better use of storage, CPU and memory. If a database administrator wishes to implement this idea, he or she must consolidate many databases into one database. One of the concerns of running many applications together in one database is: 'what will happen if one of the applications must be restored because of a human error?' Tablespace point-in-time recovery can be used for this purpose, but there are a few prerequisites; most importantly, the tablespaces must be strictly separated for each application. Another reason for creating separate databases is security: each customer has his own database. Therefore, there is often a proliferation of smaller databases. Each of them must be maintained and upgraded, and each allocates virtual memory and runs background processes, thereby wasting resources. Oracle 12c offers another possibility for virtualization, providing isolation at the database level: the multitenant container database holding pluggable databases. 2. What? Pluggable databases are logical units inside a multitenant container database, which consists of one multitenant container database and up to 252 pluggable databases. The SGA is shared, as are the background processes. The multitenant container database holds metadata information common to the pluggable databases inside the System and Sysaux tablespaces, and there is just one Undo tablespace. The pluggable databases have smaller System and Sysaux tablespaces, containing just their 'personal' metadata. New data dictionary views make the information available either at the pdb level (dba_views) or at the container level (cdb_views). There are local users, which are known in specific pluggable databases, and common users, which are known in all containers. Pluggable databases can be easily plugged into another multitenant container database and converted from a non-CDB. They can undergo point-in-time recovery. 3. How? Creating a multitenant container database can be done using the Database Configuration Assistant: there you find the new option Create as Container Database. If you prefer 'hand made' databases you can execute the command from an instance in nomount state: CREATE DATABASE cdb1 ENABLE PLUGGABLE DATABASE …. And of course this can also be achieved through Enterprise Manager Cloud Control. A freshly created multitenant container database consists of two containers: the root container as the 'rack' and a seed container, a template for future pluggable databases. There are 4 ways to create other pluggable databases: 1. Create an empty pdb from the seed 2. Plug in a non-CDB 3. Move a pdb from another CDB 4. Copy a pdb from another CDB We will discuss option 2: how to plug a non-CDB into a multitenant container database. Three different methods are available: 1. Create an empty pdb and use Datapump in traditional export/import mode or with Transportable Tablespace or Database mode. This method is suitable for pre-12c databases. 2. Create an empty pdb and use GoldenGate replication. When the pdb catches up with the non-CDB, you fail over to the pdb. 3. Databases of version 12c or higher can be plugged in with the help of the new dbms_pdb package. This is a demonstration of method 3: Step 1: Connect to the non-CDB to be plugged in and create an XML file with a description of the database. The XML file is written to $ORACLE_HOME/dbs by default and contains mainly information about the datafiles.
Step 2: Check whether the non-CDB is pluggable into the multitenant container database: Step 3: Create the pluggable database, connected to the multitenant container database. With the nocopy option the files will be reused, but the tempfile is created anew: A service is created and registered automatically with the listener: Step 4: Delete unnecessary metadata from the PDB SYSTEM tablespace: To connect to the newly created pdb, edit tnsnames.ora and add an entry for the new pdb. Connect to the plugged-in non-CDB and clean up the data dictionary to remove entries now maintained in the multitenant container database. As all kept objects have to be recompiled, it will take a few minutes. Step 5: The plugged-in database will be automatically synchronised, by creating common users and roles, when it is opened for the first time in read write mode. Step 6: Verify tablespaces and users: There is only one local tablespace (users) and one local user (scott) in the plugged-in non-CDB pdb_orcl. This method of plugging in a non-CDB is fast and easy for 12c databases. The method for unplugging a pluggable database from a CDB is to create a new non-CDB, use the new full transportable feature of Datapump, and drop the pluggable database. About the Author: Gerlinde has been working for Oracle University Germany as one of our Principal Instructors for over 14 years. She started with Oracle 7 and became an Oracle Certified Master for Oracle 10g and 11g. She is a specialist in Database Core Technologies, with profound knowledge of Backup & Recovery, Performance Tuning for DBAs and Application Developers, Data Warehouse Administration, Data Guard and Real Application Clusters.
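For readers who prefer commands over screenshots, here is a minimal sketch of the same plug-in flow driven from Python via the cx_Oracle driver; the host names, passwords, file path and pdb name are placeholders of mine, and in practice these steps are normally run from SQL*Plus as SYSDBA:

import cx_Oracle

# Step 1: on the non-CDB (opened read only), generate the XML description file.
noncdb = cx_Oracle.connect("sys", "oracle", "dbhost/noncdb", mode=cx_Oracle.SYSDBA)
noncdb.cursor().execute(
    "BEGIN DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/noncdb.xml'); END;")

# Steps 2 and 3: on the CDB root, check compatibility and plug in, reusing the datafiles.
cdb = cx_Oracle.connect("sys", "oracle", "dbhost/cdb1", mode=cx_Oracle.SYSDBA)
cur = cdb.cursor()
cur.execute("""
    DECLARE
      ok BOOLEAN;
    BEGIN
      ok := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
              pdb_descr_file => '/tmp/noncdb.xml',
              pdb_name       => 'PDB_ORCL');
      IF NOT ok THEN
        RAISE_APPLICATION_ERROR(-20001, 'Check PDB_PLUG_IN_VIOLATIONS');
      END IF;
    END;""")
cur.execute(
    "CREATE PLUGGABLE DATABASE pdb_orcl USING '/tmp/noncdb.xml' NOCOPY TEMPFILE REUSE")

# Steps 4 and 5: inside the new pdb, run $ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql to
# clean up the data dictionary, then open it: ALTER PLUGGABLE DATABASE pdb_orcl OPEN;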

    Read the article

  • FAT Volume and CE

    - by Kate Moss' Open Space
    Whenever we format a disk volume, it is a good idea to name the label so it will be easier to categorize. To label a volume, we can use the LABEL command or the UI, depending on your preference. Windows CE does provide a FAT driver that supports various formats (FAT12, FAT16, FAT32, ExFAT and TFAT - transaction-safe FAT) and many features to let you scan and even defrag the volume - but not labeling. Any time you format a volume in CE and then mount it on a PC, the label is always empty! Of course, you can always label the volume on the PC, even if it was formatted in CE. So it looks like CE does not care about the volume label at all; it neither reports the label to the OS nor changes the label on the FAT volume. So how can we set the volume label in CE? To answer this question, we need to know how FAT stores the volume label. Here are some online resources that are handy for parsing FAT: http://en.wikipedia.org/wiki/File_Allocation_Table http://www.pjrc.com/tech/8051/ide/fat32.html http://www.microsoft.com/whdc/system/platform/firmware/fatgen.mspx You can refer to PUBLIC\COMMON\OAK\DRIVERS\FSD\FATUTIL\MAIN\bootsec.h and dosbpb.h or the above links for the fields we discuss here. The first sector of a FAT volume (it is not necessarily the first sector of a disk) is the FAT boot sector and BPB (BIOS Parameter Block). The bsVolumeLabel field (at offset 43 on FAT12/16; its FAT32 counterpart bgbsVolumeLabel sits at offset 71) stores the volume label, but note that the spec also indicates: "FAT file system drivers should make sure that they update this field when the volume label file in the root directory has its name changed or created.". So we can't simply update bgbsVolumeLabel; we also need to create a volume label file in the root directory. The volume label file is not a real file but just a file entry in the root directory with zero file length and a very special file attribute, ATTR_VOLUME_ID (defined in public\common\oak\drivers\fsd\fatutil\MAIN\fatutilp.h). Locating and accessing the boot sector is quite straightforward: as long as we know the starting sector of the FAT volume, that's it. But where is the root directory? The layout of a typical FAT volume is like this: the boot sector (Volume ID in the figure), followed by the reserved sectors (1 on FAT12/16 and 32 on FAT32), then the FAT chain table(s) (there can be 1 or 2), after that the root directory (FAT12/16 only; not shown in the figure), and then the beginning of the files and directories. In FAT12/16, the root directory is placed right after the FAT, so it is not hard to calculate its offset in the volume. But in FAT32 this rule is no longer true: the first cluster of the root directory is determined by BGBPB_RootDirStrtClus (at offset 44 in the boot sector). Although this field is usually 0x00000002, the first data cluster (that is how CE initializes the root directory after formatting a volume; note that we should never assume it is always true), the FAT32 root directory is not contiguous the way it is in FAT12/16 - just like a regular file, it can be fragmented. So we need to access the FAT32 root directory by hopping from one cluster to another, traversing the FAT table. Let's trace the code now. Although the source of the FAT driver is not available in the CE Shared Source program, the formatter, Fatutil.dll, is available in public\common\oak\drivers\fsd\fatutil\MAIN\formatdisk.cpp. Be aware that the public code only provides the formatter for FAT12/16/32; for ExFAT it is still not available. FormatVolumeInternal is the main worker function. With the knowledge here, you should be able to trace the code easily.
But I would like to discuss the following code pieces     dwReservedSectors = (fo.dwFatVersion == 32) ? 32 : 1;     dwRootEntries = (fo.dwFatVersion == 32) ? 0 : fo.dwRootEntries; Note that dwReservedSectors is 32 on FAT32 and 1 on FAT12/16. The root entry count is another difference mentioned in the previous paragraph: 0 for FAT32 (dynamically allocated) and a fixed size for FAT12/16 (usually 512, defined in DEFAULT_ROOT_ENTRIES in public\common\sdk\inc\fatutil.h). And then here   memset(pBootSec->bsVolumeLabel, 0x20, sizeof(pBootSec->bsVolumeLabel)); It sets the volume label to an empty (blank-padded) string. Now let's carry on to the next section - writing the root directory.     if (fo.dwFatVersion == 32) {         if (!(fo.dwFlags & FATUTIL_FORMAT_TFAT)) {             dwRootSectors = dwSectorsPerCluster;         }         else {             DIRENTRY    dirEntry;             DWORD       offset;             int               iVolumeNo;             memset(pbBlock, 0, pdi->di_bytes_per_sect);             memset(&dirEntry, 0, sizeof(DIRENTRY));             dirEntry.de_attr = ATTR_VOLUME_ID;             // the first one is volume label             memcpy(dirEntry.de_name, "TFAT       ", sizeof (dirEntry.de_name));             memcpy(pbBlock, &dirEntry, sizeof(dirEntry));             ...             // Skip the next step of zeroing out clusters             dwCurrentSec += dwSectorsPerCluster;             dwRootSectors = 0;         }     }     // Each new root directory sector needs to be zeroed.     memset(pbBlock, 0, cbSizeBlk);     iRootSec=0;     while ( iRootSec < dwRootSectors) { Basically, the code zeroes out each entry in the root directory depending on dwRootSectors. In FAT12/16, dwRootSectors is calculated as the number of sectors we need for the root entries (512 entries in most cases), and in FAT32 it just zeroes out one cluster. Please note that if it is a TFAT volume, the code initializes the root directory with a special volume label entry for a special purpose. Although its unusual initialization is specific to TFAT, it does provide an example of how to create a volume label entry. With some minor modification, we can assign the volume label in the FAT formatter - and also remember to sync the volume label with bsVolumeLabel or bgbsVolumeLabel in the boot sector.
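To make the two locations of the label concrete, here is a minimal read-only sketch in Python - my own illustration, not CE code - that assumes an unfragmented FAT12/16 root directory and a raw volume image named fat16.img:

import struct

ATTR_VOLUME_ID = 0x08  # directory entry attribute used by the volume label entry

with open("fat16.img", "rb") as f:
    boot = f.read(512)

    # Boot sector / BPB fields (FAT12/16 layout).
    bytes_per_sector = struct.unpack_from("<H", boot, 11)[0]
    reserved_sectors = struct.unpack_from("<H", boot, 14)[0]
    num_fats         = boot[16]
    root_entries     = struct.unpack_from("<H", boot, 17)[0]
    sectors_per_fat  = struct.unpack_from("<H", boot, 22)[0]
    bpb_label        = boot[43:54].decode("ascii", "replace").rstrip()

    # The FAT12/16 root directory starts right after the reserved sectors and the FAT copies.
    root_start = (reserved_sectors + num_fats * sectors_per_fat) * bytes_per_sector
    f.seek(root_start)
    root = f.read(root_entries * 32)

dir_label = None
for off in range(0, len(root), 32):
    entry = root[off:off + 32]
    if entry[0] in (0x00, 0xE5):          # free or deleted entry
        continue
    if entry[11] & ATTR_VOLUME_ID:        # the zero-length volume label entry
        dir_label = entry[0:11].decode("ascii", "replace").rstrip()
        break

print("BPB label:           ", bpb_label)
print("Root directory label:", dir_label)

If the two printed values disagree, that is exactly the out-of-sync situation the spec quote above warns against.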

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #005

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2006 SQL SERVER – Cursor to Kill All Process in Database I indeed wrote this cursor and when I look back, I wonder how naive I was to write this. The reason for writing this cursor was to free up my database from any existing connections so I could do database operations. This worked fine, but there can be a potentially big issue if an important transaction is killed by this process. There is another way to achieve the same thing, where we can use the ALTER syntax to take the database into single-user mode. Read more about that over here and here. 2007 Rules of Third Normal Form and Normalization Advantage – 3NF The rules of 3NF are mentioned here: Make a separate table for each set of related attributes, and give each table a primary key. If an attribute depends on only part of a multi-valued key, remove it to a separate table. If attributes do not contribute to a description of the key, remove them to a separate table. Correct Syntax for Stored Procedure SP Sometimes a simple question is the most important question. I often see incorrectly written stored procedures in the industry. A few write code after the outermost BEGIN…END and a few write code after the GO statement. In this brief blog post, I have attempted to explain the same. 2008 Switch Between Result Pane and Query Pane – SQL Shortcut Many times when I am writing a query I have to scroll the result displayed in the result set. Most developers use the mouse to switch between the Query Pane and the Result Pane. There are a few developers who are crazy about keyboard shortcuts. F6 is the key which can be used to switch between the query pane and the tabs of the result pane. Interesting Observation – Use of Index and Execution Plan Query optimization is a complex game and it has its own rules. From the example in the article we have discovered that the Query Optimizer does not always use the clustered index to retrieve data; sometimes a non-clustered index provides optimal performance for retrieving the Primary Key. When all the rows and columns are selected, the Primary Key should be used to select data as it provides optimal performance. 2009 Interesting Observation – TOP 100 PERCENT and ORDER BY If you pull up any application or system where more than 100 SQL Server views are created – I am very confident that at one or two places you will notice the scenario wherein the ORDER BY clause is used in a view with TOP 100 PERCENT. SQL Server 2008 does not throw an error for a VIEW with an ORDER BY clause; moreover, it does not even acknowledge its presence. In this article we have taken three perfect examples and demonstrated which clause we should use when. Comma Separated Values (CSV) from Table Column A very common question – how to create comma separated values from a table in the database? The answer is also very common if we use XML. Check out this article for quick learning on the same subject. Azure Start Guide – Step by Step Installation Guide Though the Azure portal has changed quite a bit since I wrote this article, the concepts used in it are not outdated. They are still valid and many of the functions still work as mentioned in the article. I believe this one article will put you on the track to use Azure!
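Since the CSV-from-a-column question above comes up so often, here is a minimal sketch of the FOR XML PATH approach it refers to, wrapped in Python with pyodbc; the connection string, table and column names are placeholders of mine:

import pyodbc

# The STUFF(... FOR XML PATH('')) pattern concatenates one column into a single CSV string.
CSV_QUERY = """
SELECT STUFF(
         (SELECT ',' + Title
            FROM dbo.Articles
           ORDER BY Title
             FOR XML PATH('')),
         1, 1, '') AS CsvList;
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=SampleDB;Trusted_Connection=yes;")
row = conn.cursor().execute(CSV_QUERY).fetchone()
print(row.CsvList)   # e.g. FirstArticle,SecondArticle,ThirdArticle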
Size of Index Table for Each Index – Solution Earlier I had posted a small question on this blog and requested help from readers to participate and provide a solution. The puzzle was to write a query that will return the size of each index on any particular table. We needed a query that would return an additional column in the previously listed query containing the size of the index. This article presents two of the best solutions to the puzzle. 2010 Well, this week in 2010 was the week of puzzles, as I posted three interesting puzzles. Till today I notice pretty good interest in the puzzles. They are tricky, but they surely bring great value if you have been a database developer for a long time. I suggest you go over these puzzles and their answers. Did you really know all of the answers? I am confident that reading the following three blog posts will surely help you enhance your experience with T-SQL. SQL SERVER – Challenge – Puzzle – Usage of FAST Hint SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists 2011 DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012 Have you ever faced a situation where something does not work? When you try to fix it - you enjoy fixing it and start to appreciate the breaking changes. Well, this is exactly what I felt yesterday. Before I begin my story, I want to candidly state that I do not encourage anybody to use * in the SELECT statement. Now that the disclaimer is over – I suggest you read the original story – you will love it! Get Directory Structure using Extended Stored Procedure xp_dirtree Here is the question for you – why would you do something in SQL Server when you can do the same task much more easily in the command prompt? Well, the answer is that sometimes there are real use cases when we have to do such things. This is a similar example where I have demonstrated how in SQL Server 2012 we can use an extended stored procedure to retrieve the directory structure. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
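For reference, one way to approach that index-size puzzle is a query over sys.dm_db_partition_stats; the sketch below wraps the T-SQL in the same Python/pyodbc pattern as above, with placeholder server, database and table names of mine:

import pyodbc

# Reports the size in KB of every index on one table (used_page_count is in 8 KB pages).
INDEX_SIZE_QUERY = """
SELECT i.name                      AS index_name,
       SUM(ps.used_page_count) * 8 AS used_kb
  FROM sys.dm_db_partition_stats AS ps
  JOIN sys.indexes               AS i
    ON i.object_id = ps.object_id
   AND i.index_id  = ps.index_id
 WHERE ps.object_id = OBJECT_ID('dbo.Articles')
 GROUP BY i.name;
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=SampleDB;Trusted_Connection=yes;")
for name, used_kb in conn.cursor().execute(INDEX_SIZE_QUERY):
    print(name, used_kb)   # the heap, if any, shows up with a NULL name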

    Read the article

  • Running a Silverlight application in the Google App Engine platform

    - by rajbk
    This post shows you how to host a Silverlight application in the Google App Engine (GAE) platform. You deploy and host your Silverlight application on Google’s infrastructure by creating a configuration file and uploading it along with your application files. I tested this by uploading an old demo of mine - the four stroke engine silverlight demo. It is currently being served by the GAE over here: http://fourstrokeengine.appspot.com/ The steps to run your Silverlight application in GAE are as follows: Account Creation Create an account at http://appengine.google.com/. You are allocated a free quota at signup. Select “Create an Application”   Verify your account by SMS   Create your application by clicking on “Create an Application”   Pick an application identifier on the next screen. The identifier has to be unique. You will use this identifier when uploading your application. The application you create will by default be accessible at [applicationidentifier].appspot.com. You can also use custom domains if needed (refer to the docs).   Save your application. Download SDK  We will use the  Windows Launcher for Google App Engine tool to upload our apps (it is possible to do the same through command line). This is a GUI for creating, running and deploying applications. The launcher lets you test the app locally before deploying it to the GAE. This tool is available in the Google App Engine SDK. The GUI is written in Python and therefore needs an installation of Python to run. Download and install the Python Binaries from here: http://www.python.org/download/ Download and install the Google App Engine SDK from here: http://code.google.com/appengine/downloads.html Run the GAE Launcher. Select Create New Application.   On the next dialog, give your application a name (this must match the identifier we created earlier) For Parent Directory, point to the directory containing your Silverlight files. Change the port if you want to. The port is used by the GAE local web server. The server is started if you choose to run the application locally for testing purposes. Hit Save. Configure, Test and Upload As shown below, the files I am interested in uploading for my Silverlight demo app are The html page used to host the Silverlight control The xap file containing the compiled Silverlight application A favicon.ico file.   We now create a configuration file for our application called app.yaml. The app.yaml file specifies how URL paths correspond to request handlers and static files.  We edit the file by selecting our app in the GUI and clicking “Edit” The contents of file after editing is shown below (note that the contents of the file should be in plain text): application: fourstrokeengine version: 1 runtime: python api_version: 1 handlers: - url: /   static_files: Default.html   upload: Default.html - url: /favicon.ico   static_files: favicon.ico   upload: favicon.ico - url: /FourStrokeEngine.xap   static_files: FourStrokeEngine.xap   upload: FourStrokeEngine.xap   mime_type: application/x-silverlight-app - url: /.*   static_files: Default.html   upload: Default.html We have listed URL patterns for our files, specified them as static files and specified a mime type for our xap file. The wild card URL at the end will match all URLs that are not found to our default page (you would normally include a html file that displays a 404 message).  To understand more about app.yaml, refer to this page. Save the file. Run the application locally by selecting “Browse” in the GUI. 
A web server listening on the port you specified is started (8080 in my case). The app is loaded in your default web browser pointing to http://localhost:8080/. Make sure the application works as expected. We are now ready to deploy. Click the “Deploy” icon. You will be prompted for your username and password. Hit OK. The files will get uploaded and you should get a dialog telling you to “close the window”. We are done uploading our Silverlight application. Go to http://appengine.google.com/ and launch the application by clicking on the link in the “Current Version” column.   You should be taken to a URL which points to your application running in Google’s infrastructure : http://fourstrokeengine.appspot.com/. We are done deploying our application! Clicking on the link in the Application column will take you to the Admin console where you can see stats related to system usage.  To learn more about the Google Application Engine, go here: http://code.google.com/appengine/docs/whatisgoogleappengine.html

    Read the article

  • IOUG and Oracle Enterprise Manager User Community Twitter Chat and Sessions at OpenWorld

    - by Anand Akela
    Like last many years, we will have annual Oracle Users Forum on Sunday, September 30th, 2012 at Moscone West, Levels 2 & 3. It will be open to all registered attendees of Oracle Open World and conferences running from September 29 to October 5, 2012. This will be a great opportunity to meet with colleagues, peers, and subject matter experts to share best practices, tips, and techniques around Oracle technologies. You could sit in on a special interest group (SIG) meeting or session and learn how to get more out of Oracle technologies and applications. IOUG and Oracle Enterprise Manager team invites you to join a Twitter Chat on Sunday, Sep. 30th from 11:30 AM to 12:30 PM. IOUG leaders, Enterprise Manager SIG contributors and many Oracle Users Forum speakers will answer questions related to their experience with Oracle Enterprise Manager and the activities and resources available for Enterprise Manager SIG members. You can participate in the chat using hash tag #em12c on Twitter.com or by going to tweetchat.com/room/em12c (Needs Twitter credential for participating). Feel free to join IOUG and Enterprise team members at the User Group Pavilion on 2nd Floor, Moscone West. Here is the complete list of Oracle Enterprise Manager sessions during the Oracle Users Forum: Time Session Title Speakers Location 8:00AM - 8:45AM UGF4569 - Oracle RAC Migration with Oracle Automatic Storage Management and Oracle Enterprise Manager 12c VINOD Emmanuel - Database Engineering, Dell, Inc. Wendy Chen - Sr. Systems Engineer, Dell, Inc. Moscone West - 2011 8:00AM - 8:45AM UGF10389 - Monitoring Storage Systems for Oracle Enterprise Manager 12c Anand Ranganathan - Product Manager, NetApp Moscone West - 2016 9:00AM - 10:00AM UGF2571 - Make Oracle Enterprise Manager Sing and Dance with the Command-Line Interface Ray Smith - Senior Database Administrator, Portland General Electric Moscone West - 2011 10:30AM - 11:30AM UGF2850 - Optimal Support: Oracle Enterprise Manager 12c Cloud Control, My Oracle Support, and More April Sims - DBA, Southern Utah University Moscone West - 2011 11:30AM - 12:30PM IOUG and Oracle Enterprise Manager Joint Tweet Chat Join IOUG Leaders, IOUG's Enterprise Manager SIG Contributors and Speakers on Twitter and ask questions related to practitioner's experience with Oracle Enterprise Manager and the new IOUG's Enterprise Manager SIG. To attend and participate in the chat, please use hash tag #em12c on twitter.com or your favorite Twitter client. You can also go to tweetchat.com/room/em12c to watch the conversation or login with your twitter credentials to ask questions.
User Group Pavilion 2nd Floor, Moscone West 12:30PM-2:00PM UGF5131 - Migrating from Oracle Enterprise Manager 10g Grid Control to 12c Cloud Control Leighton Nelson - Database Administrator, Mercy Moscone West - 2011 2:15PM-3:15PM UGF6511 - Database Performance Tuning: Get the Best out of Oracle Enterprise Manager 12c Cloud Control Mike Ault - Oracle Guru, TEXAS MEMORY SYSTEMS INC Tariq Farooq - CEO/Founder, BrainSurface Moscone West - 2011 3:30PM-4:30PM UGF4556 - Will It Blend? Verifying Capacity in Server and Database Consolidations Jeremiah Wilton - Database Technology, Blue Gecko / DatAvail Moscone West - 2018 3:30PM-4:30PM UGF10400 - Oracle Enterprise Manager 12c: Monitoring, Metric Extensions, and Configuration Best Practices Kellyn Pot'Vin - Sr. Technical Consultant, Enkitec Moscone West - 2011 Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Back Up to Tape the Way You Shop For Groceries

    - by rickramsey
    Imagine if this was how you shopped for groceries: From the end of the aisle sprint to the point where you reach the ketchup. Pull a bottle from the shelf and yell at the top of your lungs, “Got it!” Sprint back to the end of the aisle. Start again and sprint down the same aisle to the mustard, pull a bottle from the shelf and again yell for the whole store to hear, “Got it!” Sprint back to the end of the aisle. Repeat this procedure for every item you need in the aisle. Proceed to the next aisle and follow the same steps for the list of items you need from that aisle. Sounds ridiculous, doesn’t it? Not only is it horribly inefficient, it’s exhausting and can lead to wear out failures on your grocery cart, or worse, yourself. This is essentially how NetApp and some other applications write NDMP backups to tape. In the analogy, the ketchup and mustard are the files to be written, yelling “Got it!” is the equivalent of a sync mark at the end of a file, and the sprint back to the end of an aisle is the process most commonly called a “backhitch” where the drive has to back up on a tape to start writing again. Writing to tape in this way results in very slow tape drive performance and imposes unnecessary wear on the tape drive and the media, especially when writing small files. The good news is not all tape drives behave this way when writing small files. Unlike midrange LTO drives, Oracle’s StorageTek T10000D tape drive is designed to handle this scenario efficiently. The difference between the two drive types is that the T10000D drive gives you the ability to write files in a NetApp NDMP backup environment the way you would normally shop for groceries. With grocery shopping, you essentially stream through aisles picking up items as you go, and then after checking out, yell, “Got it!”, though you might do that last step silently. With the T10000D, it has a feature called the Tape Application Accelerator, which prevents the drive from having to stop after each file is written to notify NetApp or another application that the write was successful. When enabled in the T10000D tape drive, Tape Application Accelerator causes the tape drive to respond to tape mark and file sync commands differently than when disabled: A tape mark received by the tape drive is treated as a buffered tape mark. A file sync received by the tape drive is treated as a no op command. Since buffered tape marks and no op commands do not cause the tape drive to empty the contents of its buffer to tape and backhitch, the data is written to tape in significantly less time. Oracle has emulated NetApp environments with a number of different file sizes and found the following when comparing the T10000D with the Tape Application Accelerator enabled versus LTO6 tape drives. Notice how the T10000D is not only monumentally faster, but also remarkably consistent? In addition, the writing of the 50 GB of files is done without a single backhitch. The LTO6 drive, meanwhile, will perform as many as 3,800 backhitches! At the end of writing the entire set of files, the T10000D tape drive reports back to the application, in this case NetApp, that the write was successful via a tape mark. So if the Tape Application Accelerator dramatically improves performance and reliability, why wouldn’t you always have it enabled? The reason is because tape drive buffers are meant to be just temporary data repositories so in the event of a power loss, there could be data loss in certain environments for the files that resided in the buffer. 
Fortunately, we do have best practices depending on your environment to avoid this from happening. I highly recommend reading Maximizing Tape Performance with StorageTek T10000 Tape Drives (pdf) to decide which best practice is right for you. The white paper also digs deeper into the benefits of the Tape Application Accelerator. The white paper is free, and after downloading it you can decide for yourself whether you want to yell “Got it!” out loud or just silently to yourself. Customer Advisory Panel One final link: Oracle has started up a Customer Advisory Panel program to collect feedback from customers on their current experiences with Oracle products, as well as desires for future product development. If you would like to participate in the program, go to this link at oracle.com. photo taken on Idaho's Sacajewea Historic Biway by Rick Ramsey - Brian Zents Follow OTN on Blog | Facebook | Twitter | YouTube

    Read the article

  • How to get sound on macbook pro 4,1

    - by Thomas
    I have just installed Xubuntu 12.04.2. My soundcard is detected: thomas@thomas-pc:~$ sudo aplay -l **** List of PLAYBACK Hardware Devices **** Home directory /home/thomas not ours. card 0: Intel [HDA Intel], device 0: ALC889A Analog [ALC889A Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 1: ALC889A Digital [ALC889A Digital] Subdevices: 1/1 Subdevice #0: subdevice #0 Everything is put to max in alsamixer and nothing is muted (all the sliders are on OO. My speakers do not work, but when I plug in a headphone I hear it very soft. When I connect my stereo and put the sound VERY loud (3-blocks-of-complaining-neighbours loud) I hear it on a normal level but crackling. I added options snd-hda-intel model=mbp5 amixer set IEC958 off to at the end of /etc/modprobe.d/alsa-base.conf. When it's still not working I tried everything here: https://help.ubuntu.com/community/SoundTroubleshooting 1 >>> list-sinks 1 sink(s) available. * index: 0 name: <alsa_output.pci-0000_00_1b.0.analog-stereo> driver: <module-alsa-card.c> flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY DYNAMIC_LATENCY state: SUSPENDED suspend cause: IDLE priority: 9959 volume: 0: 100% 1: 100% 0: 0.00 dB 1: 0.00 dB balance 0.00 base volume: 100% 0.00 dB volume steps: 65537 muted: no current latency: 0.00 ms max request: 0 KiB max rewind: 0 KiB monitor source: 0 sample spec: s16le 2ch 44100Hz channel map: front-left,front-right Stereo used by: 0 linked by: 0 configured latency: 0.00 ms; range is 0.50 .. 371.52 ms card: 0 <alsa_card.pci-0000_00_1b.0> module: 4 properties: alsa.resolution_bits = "16" device.api = "alsa" device.class = "sound" alsa.class = "generic" alsa.subclass = "generic-mix" alsa.name = "ALC889A Analog" alsa.id = "ALC889A Analog" alsa.subdevice = "0" alsa.subdevice_name = "subdevice #0" alsa.device = "0" alsa.card = "0" alsa.card_name = "HDA Intel" alsa.long_card_name = "HDA Intel at 0x9b500000 irq 46" alsa.driver_name = "snd_hda_intel" device.bus_path = "pci-0000:00:1b.0" sysfs.path = "/devices/pci0000:00/0000:00:1b.0/sound/card0" device.bus = "pci" device.vendor.id = "8086" device.vendor.name = "Intel Corporation" device.product.name = "82801H (ICH8 Family) HD Audio Controller" device.form_factor = "internal" device.string = "front:0" device.buffering.buffer_size = "65536" device.buffering.fragment_size = "32768" device.access_mode = "mmap+timer" device.profile.name = "analog-stereo" device.profile.description = "Analog Stereo" device.description = "Built-in Audio Analog Stereo" alsa.mixer_name = "Realtek ALC889A" alsa.components = "HDA:10ec0885,106b3a00,00100103" module-udev-detect.discovered = "1" device.icon_name = "audio-card-pci" ports: analog-output-speaker: Speakers (priority 10000, available: unknown) properties: analog-output-headphones: Headphones (priority 9000, available: no) properties: active port: <analog-output-speaker> 2 and 3: Doesn't seem an permission issue, the sound is very far away (See opening paragraph). 4 thomas@thomas-pc:~$ sudo aplay -l **** List of PLAYBACK Hardware Devices **** Home directory /home/thomas not ours. card 0: Intel [HDA Intel], device 0: ALC889A Analog [ALC889A Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 1: ALC889A Digital [ALC889A Digital] Subdevices: 1/1 Subdevice #0: subdevice #0 5 thomas@thomas-pc:~$ find /lib/modules/`uname -r` | grep snd /lib/modules/3.2.0-48-generic/kernel/sound/core/snd-hwdep.ko /lib/modules/3.2.0-48-generic/kernel/sound/core/snd-pcm.ko [.. 
huge lists continues ..] /lib/modules/3.2.0-48-generic/kernel/sound/pcmcia/pdaudiocf/snd-pdaudiocf.ko /lib/modules/3.2.0-48-generic/kernel/sound/pcmcia/vx/snd-vxpocket.ko thomas@thomas-pc:~$ 6 thomas@thomas-pc:~$ lspci -v | grep -A7 -i "audio" 00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 03) Subsystem: Apple Inc. Device 00a4 Flags: bus master, fast devsel, latency 0, IRQ 46 Memory at 9b500000 (64-bit, non-prefetchable) [size=16K] Capabilities: <access denied> Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 7 I guess it's supported. Linux mint and Xubuntu 13.04 had no trouble with sounds. Everything worked out of the box Thanks in advance Edit: alsa-info.sh output: WARNING: /etc/modprobe.d/alsa-base.conf line 45: ignoring bad line starting with 'amixer' ALSA Information Script v 0.4.62 -------------------------------- This script visits the following commands/files to collect diagnostic information about your ALSA installation and sound related hardware. dmesg lspci lsmod aplay amixer alsactl /proc/asound/ /sys/class/sound/ ~/.asoundrc (etc.) See './alsa-info.sh --help' for command line options. WARNING: /etc/modprobe.d/alsa-base.conf line 45: ignoring bad line starting with 'amixer' Automatically upload ALSA information to www.alsa-project.org? [y/N] : y Uploading information to www.alsa-project.org ... Done! Your ALSA information is located at http://www.alsa-project.org/db/?f=6cffc584284d4c0b266eb53249824ef83d6c4e3e Please inform the person helping you. thomas@thomas-pc:~$

    Read the article

  • Dual Boot Oracle Solaris 11/11 and Linux (Ubuntu 11.10/grub2)

    - by HartmutStreppel
    After having worked with Open Solaris on my laptop first, then with an upgrade to Oracle Solaris 11 Express, I finally did a fresh install of Oracle Solaris 11/11, when it became available. I am not a big fan of upgrades as I know that I am not the perfect administrator and my system gets spoiled with unclean configurations, outdated packages and wrong settings that cannot be reversed. So I prefer to start from scratch. Especially with Oracle Solaris 11 I wanted to have a system just like a customer would have it in production. The installation was smooth - more or less, if I had only read the documentation a bit better in advance. For a number of reasons I prefer a dual boot system. The most important one is, that especially with mobile devices you often run into network problems. And you have a hard time figuring out where the problem is: in your laptop hardware, in the OS you are running, or really within the network. If you have an alternate OS to boot, you can exclude the OS and your hardware. This makes you feel better. The second OS should be a Linux variant - and for some not so obvious reason I decided to go with the latest Ubuntu release (11.10). It replaced a very old Open Suse installation that had not been booted for a while. I knew that it was probably best to install Ubuntu first and then Oracle Solaris 11, as this would put the right boot information for Oracle Solaris  into the MBR and onto the root partition. But then, how to enable dual boot with the 2 OSes. Searching the web one mainly finds information about dual boot of: Linux and Linux Linux and Windows I do not want to explain which wrong configurations I worked through, but I prefer to explain the final setup, which is extremely simple, and I am wondering why this is not covered as the easiest solution for most dual boot setups. I use chainloader from and to both OS'es, with the only disadvantage that I have to confirm two grub menus each time I want to boot the "other" OS. Still there were some hurdles to jump over: Ubuntu did not like getting its boot blocks being placed on the partition instead of the disk; I must admit that I do not fully understand why. But using the --force option you could get that done Ubuntu needs an active partition; that was easy to achieve grub2 uses a different numbering scheme for the partitions. That is in the docs, if you read them. BTW: The usual disclaimer is valid. There is  no guarantee that what I describe works or works well. Please back up your data carefully before trying any of this. So, Oracle Solaris 11 is installed on the first partition and Ubuntu on the third. With Ubtuntu things initially were a bit more complicated, as I did not know how to boot it. And the live CD did not offer the capability to boot the on-disk image (at least I did not find it). So I booted the live CD, mounted the Ubuntu installation at /mnt and wrote the boot blocks into the partition. This is something that does not seem to be recommended, at least grub-install refrained from doing what I intended. After a bit more research I was bold enough to use the --force option and wrote the boot blocks to /dev/sda3 using grub-install --boot-directory=/mnt/boot --force --no-floppy /dev/sda3 So, I now had a system with the Solaris boot loader in the MBR, Solaris specific boot blocks on the Solaris root partition and Ubuntu specific boot blocks in the Ubuntu partition. I just had to chain them together and I was done. 
Oracle Solaris 11: I have added the following lines to /rpool/boot/grub/menu.lst (be aware of the /rpool!!!!):
title Ubuntu 11.10
root (hd0,2)
makeactive
chainloader +1
boot
The Ubuntu root file system sits on the third partition (/dev/sda3). Ubuntu: I have added the following lines to /etc/grub.d/40_custom:
menuentry "Solaris 11/11" {
      set root=(hd0,1)
      chainloader +1
}
Two things need to be mentioned: a) grub2 starts numbering partitions with 1, so my /dev/sda1 is partition 1. b) Oracle Solaris boots without the partition being made active (btw: the command to make a partition active with grub2 is "parttool (hd0,1) boot+", which currently does not work for me). As debugging grub is a bit complicated, I used the grub CLI to perform some tests and also used a tool that I found on sourceforge.net that was able to prepare a list of all boot loaders on all partitions. This told me that the basic setup was correct. Unfortunately I lost it in the live CD environment. I hope this is helpful for some of the readers. Hartmut

    Read the article

  • Creating Visual Studio projects that only contain static files

    - by Eilon
    Have you ever wanted to create a Visual Studio project that only contained static files and didn’t contain any code? While working on ASP.NET MVC we had a need for exactly this type of project. Most of the projects in the ASP.NET MVC solution contain code, such as managed code (C#), unit test libraries (C#), and Script# code for generating our JavaScript code. However, one of the projects, MvcFuturesFiles, contains no code at all. It only contains static files that get copied to the build output folder: As you may well know, adding static files to an existing Visual Studio project is easy. Just add the file to the project and in the property grid set its Build Action to “Content” and the Copy to Output Directory to “Copy if newer.” This works great if you have just a few static files that go along with other code that gets compiled into an executable (EXE, DLL, etc.). But this solution does not work well if the projects only contains static files and has no compiled code. If you create a new project in Visual Studio and add static files to it you’ll still get an EXE or DLL copied to the output folder, despite not having any actual code. We wanted to avoid having a teeny little DLL generated in the output folder. In ASP.NET MVC 2 we came up with a simple solution to this problem. We started out with a regular C# Class Library project but then edited the project file to alter how it gets built. The critical part to get this to work is to define the MSBuild targets for Build, Clean, and Rebuild to perform custom tasks instead of running the compiler. The Build, Clean, and Rebuild targets are the three main targets that Visual Studio requires in every project so that the normal UI functions properly. If they are not defined then running certain commands in Visual Studio’s Build menu will cause errors. 
Once you create the class library project there are a few easy steps to change it into a static file project: The first step in editing the csproj file is to remove the reference to the Microsoft.CSharp.targets file because the project doesn’t contain any C# code: <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" /> The second step is to define the new Build, Clean, and Rebuild targets to delete and then copy the content files: <Target Name="Build"> <Copy SourceFiles="@(Content)" DestinationFiles="@(Content->'$(OutputPath)%(RelativeDir)%(Filename)%(Extension)')" /> </Target> <Target Name="Clean"> <Exec Command="rd /s /q $(OutputPath)" Condition="Exists($(OutputPath))" /> </Target> <Target Name="Rebuild" DependsOnTargets="Clean;Build"> </Target> The third and last step is to add all the files to the project as normal Content files (as you would do in any project type). To see how we did this in the ASP.NET MVC 2 project you can download the source code and inspect the MvcFuturesFiles.csproj project file. If you’re working on a project that contains many static files I hope this solution helps you out!

    Read the article

  • Microsoft Sql Server driver for Nodejs - Part 2

    - by chanderdhall
    Nodejs, Sql server and Json response with Rest This post is part 2 of Microsoft Sql Server driver for Node js. In this post we will look at the JSON responses from the Microsoft Sql Server driver for Node js. Pre-requisites: If you have read Part 1 of the series, you should be good. We will be using a framework for REST within Nodejs - Restify, but that needs no prior learning. Restify: Restify is a simple node module for building RESTful services. It is slimmer than Express. Express is a complete module that has all that you need to create a full-blown browser app. However, Restify does not have the additional overhead of templating, rendering etc. that would be needed if your app has views. So, as the name suggests, it's an awesome framework for building RESTful services and is very light-weight. Set up - You can continue with the same directory or project structure we had in the previous post, or you can start a new one. Install restify using npm and you are good to go. npm install restify Go to Server.js and include Restify in your solution. Then create the server object using restify.createServer() - SLICK - ha? var restify = require('restify'); var server = restify.createServer(); server.listen(8080, function () { console.log('%s listening at %s', server.name, server.url); }); Then make sure you provide a port for the server to listen at. The callback function is optional but helps you for debugging purposes. Once you are done, save the file, go to the command prompt, hit 'node server.js' and you should see the following: To test the server, go to your browser and type the address 'http://localhost:8080/' and oops, you will see an error. Why is that? - Well, because we haven't defined any routes. Let's go ahead and create a route. To begin with I'd like to return whatever is typed in the url after my name, and the following code should do it. server.get('/ChanderDhall/:status', function respond(req, res, next) { res.end("hello " + req.params.status + "") }); You can also avoid writing callbacks inline. Something like this. function respond(req, res, next) { res.end("Chander Dhall " + req.params.name + ""); } server.get('/hello/:name', respond); Now if you go ahead and type http://localhost:8080/ChanderDhall/LovesNode you will get the response 'Chander Dhall LovesNode'. NOTE: Make sure your url has the right case as it's case-sensitive. You could have also typed it in as 'server.get('/chanderdhall/:name', respond);' Stored procedure: We've talked a lot about Restify now, but keep in mind the post is about being able to use Sql server with Node and return JSON. To see this in action, let's go ahead and create another route that returns a list of Employees from a stored procedure. server.get('/Employees', Employees); The following code will return a JSON response. function Employees(req, res, next) { res.header("Content-Type", "application/json"); //Need to specify the Content-Type, which is JSON in our case. sql.open(conn_str, function (err, conn) { if (err) { //Logs an error console.log("Error opening the database connection!"); return; } console.log("before query!"); conn.queryRaw("exec sp_GetEmployees", function (err, results) { if (err) { //Connection is open but an error occurs while running the query console.log("Error running query!"); return; } // ... the rest of the handler (serializing results and sending the JSON back) is omitted in this excerpt }); }); } What else can be done? Maybe create a formatter, or maybe even come up with a hypermedia type, but that may upset some pragmatists. Well, that's going to be a totally different discussion and is really not part of this series.
Summary: We've discussed how to execute a stored procedure using the Microsoft SQL Server driver for Node. We have also discussed how to format and send out clean JSON to the app calling this API.

    Read the article

  • SQL SERVER – An Efficiency Tool to Compare and Synchronize SQL Server Databases

    - by Pinal Dave
    There is no need to reinvent the wheel if it is already invented, and if the wheel is already available at ease, there is no need to wait to grab it. Here is a similar situation. I came across a very interesting situation and I had to look for an efficient tool which could make my life easier and solve my business problem. Here is the scenario. One of the developers had deleted a few rows from a very important mapping table on our development server (thankfully, it was not the production server). Though it was a development server, the entire development team had to stop working as the application started to crash on every page. Think about the loss of manpower and efficiency we started to incur. Pretty much every department had to stop working as our internal development application stopped working. Thankfully, we even take backups of our development server, and we had access to a full backup of the entire database taken at 6 AM that morning. We do not back up the development server as frequently as the production server (naturally!). Even though we had a full backup, the solution was not to restore the database. Think about it: there were plenty of other operations since the last good full backup, and if we restored the full backup, we would pretty much overwrite the work done by the developers since the morning. Now, as restoring the full backup was not an option, we decided to restore the same database on another server. Once we had restored our database to another server, the challenge was to compare the table from which the data was deleted. The mapping table from which the data was deleted contained over 5000 rows and it was humanly impossible to compare both tables manually. Finally we decided to use the efficiency tool dbForge Data Compare for SQL Server from Devart. dbForge Data Compare for SQL Server is a powerful, fast and easy to use SQL compare tool, capable of using native SQL Server backups as a metadata source. (FYI we downloaded dbForge Data Compare.) Once we discovered the product, we immediately downloaded it and installed it on our development server. After we installed the product, we were greeted with the following screen. We clicked on New Data Comparison to start our new comparison project. It brought up the following screen. Here is the best part of the product: we just had to enter our database connection username and password along with the source and destination details, and we were done. The entire process is very simple and intuitive. The best part was that for the source we could select either a database or a backup. This was indeed a fantastic feature. Think about it: if you have a very big database, it will take a long time to restore it on the server, and only once it is restored will you be able to work with it. However, dbForge Data Compare will accept a database backup as your source or destination. Once I clicked on Execute, it brought up the following screen, where it displayed an excellent summary of the data compare. It has dedicated tabs for what is changing in which table as well as details of the changed data. The best part is that, once we had reviewed the changes, we clicked on the Synchronize button in the menu bar and it brought up the following screen. You can see that the screen has very simple, straightforward but very powerful features. You can generate a script to synchronize from target to source or even from source to target.
Additionally, the database is a very complicated world, and there are extensive options to configure various database settings on the next screen. We also have the option to either generate the script or directly execute it against the target server. I like to play on the safe side, so I generated the script for my synchronization and, after review, deployed it on the server. Well, my team and I were able to recover from our disaster in less than 10 minutes. A few people in our team were indeed disappointed, as they were thinking of going home early that day, but in less than 10 minutes they had to get back to work. There are so many other features in dbForge Data Compare for SQL Server that I am already planning to make it the company-wide recommended data compare tool. Hats off to the team who built this product. Here are a few of the salient features of dbForge Data Compare for SQL Server: Perform SQL Server database comparison to detect changes Compare SQL Server backups with live databases Analyze data differences between two databases Synchronize two databases that went out of sync Restore data of a particular table from the backup Generate data comparison reports in Excel and HTML formats Copy look-up data from development database to production Automate routine data synchronization tasks with command-line interface Go ahead and download dbForge Data Compare for SQL Server right away. It is always a good idea to get familiar with the important tools beforehand instead of learning them under the pressure of a disaster. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
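As a rough idea of what such a data compare does conceptually, here is a minimal hand-rolled sketch in Python with pyodbc; the connection strings, table and key column are placeholders of mine, it only finds key values missing from the target, and it is in no way a substitute for what the tool itself covers:

import pyodbc

SOURCE = ("DRIVER={ODBC Driver 17 for SQL Server};SERVER=restoreserver;"
          "DATABASE=DevDB_Restored;Trusted_Connection=yes;")
TARGET = ("DRIVER={ODBC Driver 17 for SQL Server};SERVER=devserver;"
          "DATABASE=DevDB;Trusted_Connection=yes;")
QUERY = "SELECT MappingID FROM dbo.MappingTable;"

def key_set(conn_str):
    # Collect the key values of the mapping table from one server.
    conn = pyodbc.connect(conn_str)
    try:
        return {row.MappingID for row in conn.cursor().execute(QUERY)}
    finally:
        conn.close()

# Rows present in the restored backup but missing from the damaged development database.
deleted_keys = key_set(SOURCE) - key_set(TARGET)
print(len(deleted_keys), "rows to copy back:", sorted(deleted_keys)[:20])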

    Read the article

  • GitHub Integration in Windows Azure Web Site

    - by Shaun
    Microsoft has just announced an update for Windows Azure Web Site (a.k.a. WAWS). There are four major features added in WAWS: free scaling mode, GitHub integration, custom domains and multiple branches. Since I'm working in Node.js and I would like to have my code in GitHub and deployed automatically to my Windows Azure Web Site once I sync my code, this feature is great news to me. It's very simple to establish the GitHub integration in WAWS. First we need a clean WAWS. In its dashboard page click "Set up Git publishing". Currently WAWS doesn't support changing the publish setting. So if you have an existing WAWS which is published by TFS or local Git, then you have to create a new WAWS and set up the Git publishing. Then in the deployment page we can see that WAWS now supports three Git publishing modes: - Push my local files to Windows Azure: In this mode we will create a new Git repository on the local machine and commit and publish our code to Windows Azure through Git commands or some GUI. - Deploy from my GitHub project: In this mode we will have a Git repository created on GitHub. Once we publish our code to GitHub, Windows Azure will download the code and trigger a new deployment. - Deploy from my CodePlex project: Similar to the previous one, but our code would be in a CodePlex repository. Now let's go back to GitHub and create a new publish repository. Currently the WAWS GitHub integration only supports public repositories. Support for private repositories will be available in several weeks. We can manage our repositories on the GitHub website. But as a Windows geek I prefer the GUI tool. So I opened GitHub for Windows, logged in with my GitHub account, selected the "github" category, and clicked the "add" button to create a new repository on GitHub. You can download GitHub for Windows here. I specified the repository name, description and local repository, and did not check "Keep this code private". After a few seconds it created a new repository on GitHub and associated it to my local machine in that folder. We can find this new repository on the GitHub website. And in GitHub for Windows we can also find the local repository by selecting the "local" category. Next, we need to associate this repository with our WAWS. Back in the Windows Azure developer portal, open "Deploy from my GitHub project" in the deployment page and click the "Authorize Windows Azure" link. It will bring up a new window on GitHub which lets me allow the Windows Azure application to access my repositories. After we clicked "Allow", Windows Azure retrieved all my GitHub public repositories and let me select which one I want to integrate with this WAWS. I selected the one I had just created in GitHub for Windows. So that's all. We have completed the GitHub integration configuration. Now let's have a try. In GitHub for Windows, right click on this local repository and click "open in explorer". Then I added a simple HTML file. <html> <head> </head> <body> <h1> I came from GitHub, WOW! </h1> </body> </html> Save it, go back to GitHub for Windows, commit this change and publish. This will upload our changes to GitHub, and Windows Azure will detect this update and trigger a new deployment. If we go back to the Azure developer portal we can find the new deployment. And our commit message will be shown as the deployment description as well. And here is the page deployed to WAWS. Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind.
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed, the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you’ll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle.

If you’ve read my previous blog posts, you’ll be aware that I’ve been focusing on the database continuous integration theme. In my CI setup I create a “production”-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it’s not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn’t I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical.

1. My CI environment isn’t an exact copy of my production environment. In a perfect world it would be, and this is strongly recommended as good practice if you follow Jez Humble and David Farley’s “Continuous Delivery” teachings, but in practical terms it isn’t always possible, especially where storage is concerned. It may simply not be possible to restore a huge production database in the environment you’ve been allotted.

2. It’s not just about the storage requirements; it’s also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible when the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control, I’m just not going to get the feedback quickly enough to react.

So what’s the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I’m sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore effectively mounts the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on the virtual database, because from SQL Server’s point of view it is no different from a conventional database.

Instead of doubling the storage requirements upon a restore, there are no ‘duplicate’ storage requirements, other than the trivially small virtual log and data files. The benefit is magnified the more databases you mount to the same backup file, so this technique could be used to give a large development team a full development instance of a large production database each.

It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly “release test” process triggered by my CI tool:
RESTORE DATABASE WidgetProduction_Virtual
FROM DISK=N'D:\VirtualDatabase\WidgetProduction.bak'
WITH
    MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
    MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
    NORECOVERY, STATS=1, REPLACE
GO

RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY

Note that the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts the restore by monitoring these extensions and applies its magic, ensuring the ‘virtual’ restore happens rather than the conventional, storage-heavy restore.

My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly (a sketch of the kind of check involved follows below). For illustration, consider my 8 GB production database and its corresponding backup file: following the virtual restore, the .vldf and .vmdf files represent the only additional storage used by the new database.

The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial.

Technorati Tags: SQL Server
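As a footnote to the excerpt above: the post doesn’t show what those validation tests actually look like, so here is a minimal sketch of the kind of check that could be run against the virtually restored database. The database name comes from the RESTORE example above; the specific checks (DBCC CHECKDB plus a smoke test against a hypothetical dbo.Widget table) and the final DROP are illustrative assumptions, not the author’s actual release-test suite.

-- Run against the virtually restored database created above.
USE WidgetProduction_Virtual
GO

-- Integrity check: the virtual database accepts reads and writes,
-- so DBCC CHECKDB behaves as it would after a conventional restore.
DBCC CHECKDB (N'WidgetProduction_Virtual') WITH NO_INFOMSGS
GO

-- Smoke test after the upgrade scripts have been applied: confirm an expected
-- table exists. 'dbo.Widget' is an assumed name, not one taken from the post.
IF OBJECT_ID(N'dbo.Widget', N'U') IS NULL
    RAISERROR (N'Upgrade validation failed: dbo.Widget is missing.', 16, 1)
GO

-- Simple sanity query against the assumed table.
SELECT COUNT(*) AS WidgetRowCount FROM dbo.Widget
GO

-- Tidy up so the nightly job starts from a clean slate next time.
USE master
GO
DROP DATABASE WidgetProduction_Virtual
GO

Because any writes made by the upgrade and validation steps land in the small .vmdf/.vldf delta files rather than in a full copy of the data, repeating this every night costs very little extra storage.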

    Read the article

< Previous Page | 785 786 787 788 789 790 791 792 793 794 795 796  | Next Page >