Search Results

Search found 81445 results on 3258 pages for 'file command'.


  • Why can't I change the permissions of files I have access to?

    - by Erik
    I'm logged into a server as user "ubuntu" and I've got files that look like this:

        -rw-rw-r-- 1 www-data www-data 33150 2012-06-04 22:17 file-a.png
        -rw-rw-r-- 1 www-data www-data 36371 2012-06-04 22:15 file-b.png
        -rw-rw-r-- 1 www-data www-data 41439 2012-06-04 22:16 file-c.png

    The ubuntu user is a member of the group www-data:

        > groups ubuntu
        ubuntu : ubuntu www-data

    So shouldn't I be able to change the permissions, since I have access to the files? I'm not an expert on the user/group stuff, so this is just perplexing me. I'm trying to run:

        > chmod o-r *

    I realize I can do it easily with sudo, but I'm trying to understand why I can't modify the files without sudo. Thanks for any help!
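    For reference, the behaviour in question is standard POSIX semantics: group membership grants whatever access the group bits encode (read/write here), but only a file's owner, or root, may change its mode with chmod. The following Python sketch just makes that visible; the file names come from the listing above and everything else is illustrative:

        # Sketch: show why a group member still cannot chmod a file it does not own.
        import os
        import pwd
        import stat

        for name in ["file-a.png", "file-b.png", "file-c.png"]:
            st = os.stat(name)
            owner = pwd.getpwuid(st.st_uid).pw_name
            mode = stat.filemode(st.st_mode)            # e.g. -rw-rw-r--
            can_chmod = st.st_uid == os.geteuid() or os.geteuid() == 0
            print(f"{name}: mode={mode} owner={owner} "
                  f"chmod allowed for current user: {can_chmod}")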

    Read the article

  • Observations in Migrating from JavaFX Script to JavaFX 2.0

    - by user12608080
    Introduction

    Having been available for a few years now, there is a decent body of work written for JavaFX using the JavaFX Script language. With the general availability announcement of JavaFX 2.0 Beta, the natural question arises about converting the legacy code over to the new JavaFX 2.0 platform. This article reflects on some of the observations encountered while porting source code over from JavaFX Script to the new JavaFX API paradigm.

    The Application

    The program chosen for migration is an implementation of the Sudoku game and serves as a reference application for the book JavaFX – Developing Rich Internet Applications. The design of the program can be divided into two major components: (1) a user interface (ideally suited for JavaFX design) and (2) the puzzle generator. For the context of this article, our primary interest lies in the user interface. The puzzle generator code was lifted from a sourceforge.net project and is written entirely in Java. Regardless of which version of the UI we choose (JavaFX Script vs. JavaFX 2.0), no code changes were required for the puzzle generator code.

    The original user interface for the JavaFX Sudoku application was written exclusively in JavaFX Script, and as such is a suitable candidate to convert over to the new JavaFX 2.0 model. However, a few notable points are worth mentioning about this program. First off, it was written in the JavaFX 1.1 timeframe, when certain capabilities of the JavaFX framework were not yet available. Citing two examples, this program creates many of its own UI controls from scratch because the built-in controls were yet to be introduced, and layout of graphical nodes is done in a very manual manner, again because much of the automatic layout capability was in flux at the time. It is worth considering that this program was written at a time when most of us were just coming up to speed on this technology. One would think that, given the opportunity to recreate this application anew, it would look a lot different from the current version.

    Comparing the Size of the Source Code

    An attempt was made to convert each of the original UI JavaFX Script source files (suffixed with .fx) over to a Java counterpart. Due to language feature differences, there are a small number of source files which only exist in one version or the other. The table below summarizes the size of each of the source files.

        JavaFX Script source file (lines / characters)  |  JavaFX 2.0 Java source file (lines / characters)
        (none)                            |  ArrowKey.java (6 / 72)
        Board.fx (221 / 6831)             |  Board.java (205 / 6508)
        BoardNode.fx (446 / 16054)        |  BoardNode.java (723 / 29356)
        ChooseNumberNode.fx (168 / 5267)  |  ChooseNumberNode.java (302 / 10235)
        CloseButtonNode.fx (115 / 3408)   |  CloseButton.java (99 / 2883)
        (none)                            |  ParentWithKeyTraversal.java (111 / 3276)
        (none)                            |  FunctionPtr.java (6 / 80)
        (none)                            |  Globals.java (20 / 554)
        Grouping.fx (8 / 140)             |  (none)
        HowToPlayNode.fx (121 / 3632)     |  HowToPlayNode.java (136 / 4849)
        IconButtonNode.fx (196 / 5748)    |  IconButtonNode.java (183 / 5865)
        Main.fx (98 / 3466)               |  Main.java (64 / 2118)
        SliderNode.fx (288 / 10349)       |  SliderNode.java (350 / 13048)
        Space.fx (78 / 1696)              |  Space.java (106 / 2095)
        SpaceNode.fx (227 / 6703)         |  SpaceNode.java (220 / 6861)
        TraversalHelper.fx (111 / 3095)   |  (none)
        Total: 2,077 lines / 79,127 chars |  Total: 2,531 lines / 87,800 chars

    A few notes about this table are in order: the number of lines in each file was determined by running the Unix 'wc -l' command over each file, and the number of characters in each file was determined by running the Unix 'ls -l' command over each file.
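    For anyone repeating this sort of tally, the counts above can be reproduced with a short script instead of running 'wc -l' and 'ls -l' by hand. A minimal Python sketch (the directory layout and glob pattern are assumptions for illustration):

        # Sketch: tally line and character (byte) counts per source file,
        # mirroring the 'wc -l' / 'ls -l' figures quoted in the article.
        import glob
        import os

        totals = {"lines": 0, "chars": 0}
        for path in sorted(glob.glob("src/**/*.java", recursive=True)):
            with open(path, "rb") as f:
                lines = f.read().count(b"\n")      # what 'wc -l' reports
            chars = os.path.getsize(path)          # what 'ls -l' reports (bytes)
            totals["lines"] += lines
            totals["chars"] += chars
            print(f"{os.path.basename(path)}: {lines} lines, {chars} characters")

        print(f"Total: {totals['lines']} lines, {totals['chars']} characters")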
    The examination of the code could certainly be much more rigorous. No standard formatting was performed on these files; all comments, however, were deleted. There was a certain expectation that the new Java version would require more lines of code than the original JavaFX Script version. As evidenced by a count of the total number of lines, the Java version has about 22% more lines than its FX Script counterpart. Furthermore, there was an additional expectation that the Java version would be more verbose in terms of the total number of characters. In fact, the preceding data shows that on average the Java source files contain fewer characters per line than the FX files. But that's not the whole story. Upon further examination, the FX Script source files had a disproportionate number of blank characters. Why? Because of the nature of how one develops JavaFX Script code. The object literal dominates FX Script code. It's not uncommon to see object literals indented halfway across the page, consuming lots of meaningless space characters.

    RAM Consumption

    Though not the most scientific analysis, memory usage for the application was examined on a Windows Vista system by running the Windows Task Manager and viewing how much memory was being consumed by the Sudoku version in question. Roughly speaking, the FX Script version, after startup, had a RAM footprint of about 90MB and remained pretty much the same size. The Java version started out at about 55MB and maintained that size throughout its execution.

    What About Binding?

    Arguably, the most striking observation about the conversion from JavaFX Script to JavaFX 2.0 concerned the need for data synchronization, or lack thereof. In JavaFX Script, the primary means to synchronize data is the bind expression (using the "bind" keyword), and perhaps to a lesser extent its "on replace" cousin. The bind keyword does not exist in Java, so for JavaFX 2.0 a Data Binding API has been introduced as a replacement. To give a feel for the difference between the two versions of the Sudoku program, the table that follows indicates how many binds were required for each source file. For JavaFX Script files, this was ascertained by simply counting the number of occurrences of the bind keyword. As can be seen, binding had been used frequently in the JavaFX Script version (and this does not take into consideration an additional half dozen or so "on replace" triggers). The JavaFX 2.0 program achieves the same functionality as the original JavaFX Script version, yet the equivalent of binding was only needed twice throughout the Java version of the source code.

        JavaFX Script source file (binds)  |  JavaFX 2.0 Java source file ("binds")
        (none)                      |  ArrowKey.java (0)
        Board.fx (1)                |  Board.java (0)
        BoardNode.fx (7)            |  BoardNode.java (0)
        ChooseNumberNode.fx (11)    |  ChooseNumberNode.java (0)
        CloseButtonNode.fx (6)      |  CloseButton.java (0)
        (none)                      |  CustomNodeWithKeyTraversal.java (0)
        (none)                      |  FunctionPtr.java (0)
        (none)                      |  Globals.java (0)
        Grouping.fx (0)             |  (none)
        HowToPlayNode.fx (7)        |  HowToPlayNode.java (0)
        IconButtonNode.fx (9)       |  IconButtonNode.java (0)
        Main.fx (1)                 |  Main.java (0)
        Main_Mobile.fx (1)          |  (none)
        SliderNode.fx (6)           |  SliderNode.java (1)
        Space.fx (0)                |  Space.java (0)
        SpaceNode.fx (9)            |  SpaceNode.java (1)
        TraversalHelper.fx (0)      |  (none)
        Total: 58                   |  Total: 2

    Conclusions

    As the JavaFX 2.0 technology is so new, and experience with the platform equally so, it is possible and indeed probable that some of the observations noted in the preceding article may not apply across other attempts at migrating applications.
    That being said, this first experience indicates that the migrated Java code will likely be larger, though not excessively so, than the original JavaFX Script source. Furthermore, it appears that the requirements for data synchronization via binding, important as it is, may be significantly less on the new platform.

    Read the article

  • Redistribution of sqlpackage.exe [SSDT]

    - by jamiet
    This is a short note for anyone who may be interested in redistributing sqlpackage.exe. If this isn't you, then there's no need to keep reading; ostensibly this is here for anyone that bingles for this information.

    sqlpackage.exe is a command-line tool that ships with SQL Server Data Tools (SSDT) in SQL Server 2012, and its main purpose (amongst other things) is to deploy .dacpac files from the command line. It's quite conceivable that one might want to install only sqlpackage.exe rather than the full SSDT suite (for example, on a production server), and I myself have recently had that need. I enquired of the SSDT product team about the possibility of doing this. I said:

    "Back in VS DB Proj days it was possible to use VSDBCMD.exe on a machine that did not have the full VS shell install by shipping lots of pre-requisites along for the ride (details at How to: Prepare a Database for Deployment From a Command Prompt by Using VSDBCMD.EXE). Is there a similar mechanism for using VSDBCMD.exe's replacement, sqlpackage.exe?"

    Here was the reply from Barclay Hill, who heads up the development team:

    "Yes, SQLPackage.exe is the analogy of VSDBCMD.exe. You can acquire it separately, in a stand-alone package, by installing DACFX. You can get it from:
    - Feature pack: http://www.microsoft.com/en-us/download/details.aspx?id=29065
    - Web Platform Installer: http://www.microsoft.com/web/gallery/install.aspx?appid=DACFX
    You will notice it has dependencies on SQLDOM and SQLCLRTYPES. WebPI will install these for you, but it is à la carte on the feature pack."

    So, now you know. I didn't enquire about the licensing of DACFX, but given that SSDT is free I am going to assume that the same applies to DACFX too.

    @Jamiet
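    For anyone who installs the stand-alone DACFX package purely to script deployments, the call is a single command line. A minimal Python sketch of wrapping it (the install path, server name, database name, and dacpac file below are assumptions; /Action, /SourceFile, /TargetServerName and /TargetDatabaseName are standard sqlpackage.exe parameters, but verify them against your version):

        # Sketch: deploy a .dacpac with sqlpackage.exe from a script.
        # Paths and names are placeholders, not values from the article.
        import subprocess

        SQLPACKAGE = r"C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe"

        args = [
            SQLPACKAGE,
            "/Action:Publish",
            r"/SourceFile:C:\builds\MyDatabase.dacpac",
            "/TargetServerName:PRODSQL01",
            "/TargetDatabaseName:MyDatabase",
        ]

        result = subprocess.run(args, capture_output=True, text=True)
        print(result.stdout)
        if result.returncode != 0:
            raise SystemExit(result.stderr)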

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I'll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install it, and play along. But first, let's describe some disaster recovery scenarios where it's useful.

    About SQL Server disaster recovery

    Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one's immune. What you can do is take all the actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it's necessary to have a list of steps that will be executed when a disaster occurs and to test them before a disaster. This way, you'll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don't meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require new recovery plan testing, as these changes most probably mean the recovery plan itself needs changing and tweaking.

    What is a good SQL Server disaster recovery plan?

    A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple but efficient backup strategy would be a full database backup every night and a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and a differential database backup every night until the next Friday, when you will create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans.

    Why are transaction logs important?

    Transaction log backups contain the transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll the database back or forward to a point in time. In SQL Server disaster recovery situations, transaction logs enable you to repair a SQL Server database and bring it to its state before the disaster. Be aware that even with regular backups, there will be some data missing: the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it's not necessary to re-create the database from its last backup. The database might still be online and all you need to do is roll back several transactions, such as a wrong update, insert, or delete. The restore to a point in time feature is available in SQL Server, but for large databases it is very time-consuming, as SQL Server first restores a full database backup and then restores the transaction log backups, one after another, up to the recovery point.
    During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it's important that you haven't manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them.

    How to read a SQL Server transaction log?

    SQL Server doesn't provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge and return a large number of columns that are not easy to read and understand, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it's necessary to match them to values in the MDF file. When you have finally read the transactions executed, you still have to create a script for them.

    How to easily repair a SQL database?

    The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the transactions it reads. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both the MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available.

    First, restore the last full database backup on SQL Server using SQL Server Management Studio. I'll name it Restored_AW2014. Then, start ApexSQL Log. It will automatically detect all local servers. If not, click the icon to the right of the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don't have transaction log backups, but the LDF file hasn't been lost during the SQL Server disaster, add it using Add.

    To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That's why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transaction, operation type (e.g. to read only data inserts), table name, the SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I'll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read, so if there were any schema changes, such as a new function created or an existing table modified, they will be ignored and the database will not be properly repaired; the data repair for modified tables will fail. In the Tables tab, I'll make sure all tables are selected. I will uncheck the Show operations on dropped tables option to reduce the number of transactions. Click Next. ApexSQL Log offers three options.
    Select Open results in grid to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and then create a redo script by clicking the Create redo script icon in the menu.

    For a large number of transactions, and in a critical situation when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be scripted directly into a redo file, without showing them in the grid first. Select Generate reconstruction (REDO) script, change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary. The third option will create a command-line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but quite useful in daily auditing scenarios.

    To repair your SQL database, all you have to do is execute the generated redo script against the restored database, using an integrated development environment tool such as SQL Server Management Studio or any other. You can find more information about how to read SQL Server transaction logs and repair a SQL database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered or restored, or transactions rolled back.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
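    As a side note on the undocumented route mentioned above, fn_dblog() can be queried directly, although its output is exactly the hard-to-read material the article describes. A minimal Python sketch using pyodbc (the connection string, driver name, and database name are assumptions, and since fn_dblog is undocumented its column list may vary between versions):

        # Sketch: peek at the online transaction log via the undocumented fn_dblog().
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;DATABASE=Restored_AW2014;Trusted_Connection=yes;"
        )
        cursor = conn.cursor()
        cursor.execute(
            "SELECT TOP (20) [Current LSN], Operation, [Transaction ID], AllocUnitName "
            "FROM fn_dblog(NULL, NULL)"
        )
        for row in cursor.fetchall():
            print(row)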

    Read the article

  • Configure clean URLs using Laravel using a rewrite rule to index.php

    - by yannis hristofakis
    Recently I've started learning Laravel; I have no prior experience with a framework. I'm encountering the following problem: I'm trying to configure the .htaccess file so I can have clean URLs, but the only thing I get are 404 Not Found error pages. I have created a virtual host (you can see the configuration file below) and changed the .htaccess file in the public directory.

    /etc/apache2/sites-available:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName laravel.lar
            DocumentRoot "/home/giannis/Desktop/laravel/public"
            <Directory "/home/giannis/Desktop/laravel/public">
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    .htaccess file (laravel/public):

        # Apache configuration file
        # http://httpd.apache.org/docs/current/mod/quickreference.html

        # Note: ".htaccess" files are an overhead for each request. This logic should
        # be placed in your Apache config whenever possible.
        # http://httpd.apache.org/docs/current/howto/htaccess.html

        # Turning on the rewrite engine is necessary for the following rules and
        # features. "+FollowSymLinks" must be enabled for this to work symbolically.
        <IfModule mod_rewrite.c>
            Options +FollowSymLinks
            RewriteEngine On
        </IfModule>

        # For all files not found in the file system, reroute the request to the
        # "index.php" front controller, keeping the query string intact
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php?/$1 [L]
        </IfModule>

    In order to test it, I have created a view named about and set up the proper routing. If I go to http://laravel.lar/index.php/about/ I am routed to the about page; if instead I go to http://laravel.lar/about/ I get a 404 Not Found error. I'm using a Debian-based system.
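    One quick way to narrow the problem down is to compare the two URL forms programmatically and confirm that only the rewritten one fails. A minimal Python sketch (the host name comes from the virtual host above; everything else is illustrative):

        # Sketch: check whether mod_rewrite is routing /about to index.php.
        import urllib.error
        import urllib.request

        for url in ("http://laravel.lar/index.php/about/",
                    "http://laravel.lar/about/"):
            try:
                with urllib.request.urlopen(url) as resp:
                    print(f"{url} -> HTTP {resp.status}")
            except urllib.error.HTTPError as exc:
                print(f"{url} -> HTTP {exc.code}")

    If the second URL keeps returning 404 while the first works, the rewrite rules are probably not being applied at all, which typically points at mod_rewrite being disabled (a2enmod rewrite) or AllowOverride not taking effect for that directory.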

    Read the article

  • Icinga error "Icinga Startup Delay does not exist" although it does

    - by aaron
    I just installed Icinga to monitor my server, following this guide: http://docs.icinga.org/0.8.1/en/wb_quickstart-idoutils.html
    Everything built and installed correctly, but Icinga is reporting a critical error with the reason: "The command defined for service Icinga Startup Delay does not exist". However, I can see that ${ICINGA_BASE}/etc/objects/localhost.cfg contains:

        define service{
            use                     local-service   ; Name of service template to use
            host_name               localhost
            service_description     Icinga Startup Delay
            check_command           check_icinga_startup_delay
            notifications_enabled   0
        }

    and ${ICINGA_BASE}/etc/objects/commands.cfg contains:

        define command {
            command_name    check_icinga_startup_delay
            command_line    $USER1$/check_dummy 0 "Icinga started with $$(($EVENTSTARTTIME$-$PROCESSSTARTTIME$)) seconds delay | delay=$$(($EVENTSTARTTIME$-$PROCESSSTARTTIME$))"
        }

    Neither of these files has been modified since the whole make/install process. I am running Ubuntu 10.04, the most recent build of icinga-core, and apache2 2.2.14. What must I do to tell Icinga that the command exists? Or is the problem that check_dummy does not exist? Where or how would I define that?
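    A quick way to test the second theory (that check_dummy itself is missing) is to resolve $USER1$ from resource.cfg and look for the plugin binary on disk. The paths in this Python sketch are assumptions based on a default source install; adjust them to your layout:

        # Sketch: verify the command is defined and that the plugin it calls exists.
        import os
        import re

        ICINGA_BASE = "/usr/local/icinga"          # assumed install prefix
        commands_cfg = os.path.join(ICINGA_BASE, "etc/objects/commands.cfg")
        resource_cfg = os.path.join(ICINGA_BASE, "etc/resource.cfg")

        # Resolve $USER1$ (normally the plugin directory, e.g. .../libexec).
        user1 = None
        with open(resource_cfg) as f:
            for line in f:
                m = re.match(r"\s*\$USER1\$\s*=\s*(\S+)", line)
                if m:
                    user1 = m.group(1)

        defined = "check_icinga_startup_delay" in open(commands_cfg).read()
        plugin = os.path.join(user1, "check_dummy") if user1 else None

        print("command defined in commands.cfg:", defined)
        print("$USER1$ resolves to:", user1)
        print("check_dummy present and executable:",
              bool(plugin) and os.access(plugin, os.X_OK))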

    Read the article

  • A Rose by Any Other Name..

    - by Geoff N. Hiten
    It is always a good start when you can steal a title line from one of the best writers in the English language. Let's hope I can make the rest of this post live up to the opening.

    One recurring problem with SQL Server is moving databases to new servers. Client applications use a variety of ways to resolve SQL Server names, some of which are not changed easily <cough SharePoint /cough>. If you happen to be using default instances on both the source and target SQL Server, then the solution is pretty simple. You create (or bug the network admin until she creates) two DNS "A" records. One points the old name to the new IP address. The other creates a new alias for the old server, since the original system name is now redirected. Note this will redirect ALL traffic addressed to the old server name to the new server, including RDP and file share connection attempts.

    Figure 1 – Microsoft DNS MMC Snap-In
    Figure 2 – DNS New Host Dialog Box

    Both records are necessary so you can still access the old server via an alternate name.

        Server Role | IP Address    | Name  | Alias
        Source      | 10.97.230.60  | SQL01 | SQL01_Old
        Target      | 10.97.230.80  | SQL02 | SQL01
        Table 1 – Alias List

    If you or somebody set up connections via IP address, you deserve to have to go to each app and fix it by hand. That is the only way to fix that particular foul-up.

    If you have to deal with named instances, either as a source or a target, then it gets more complicated. The standard fix is to use the SQL Server Configuration Manager (or one of its earlier incarnations) to create a SQL client alias to redirect the connection. This can be a pain to install and configure on multiple client servers. The good news is that SQL Server Configuration Manager, and all of its earlier versions, simply write a few registry keys. Extracting the keys into a .reg file makes centralized automated deployment a snap. If the client is a 32-bit system, you have to extract the native key. If it is 64-bit, you have to extract the native key and the WoW (32-bit on 64-bit host) key.

    First, pick a development system to create the actual registry key. If you do this repeatedly, you can simply edit an existing registry file. Create the entry using the SQL Server Configuration Manager. You must use a 64-bit system to create the WoW key. The following example redirects from a named instance "SQL01\SQLUtility" to a default instance on "SQL02".

    Figure 3 – SQL Server Configuration Manager - Native

    Figure 3 shows the native key listing.

    Figure 4 – SQL Server Configuration Manager – WoW

    If you think you don't need the WoW key because your app is 64-bit, think again. SQL Server Management Studio is a 32-bit app, as are most SQL test utilities. Always create both keys for 64-bit target systems.

    Now that the keys exist, we can extract them into a .reg file. Fire up REGEDIT and browse to the following location: HKLM\Software\Microsoft\MSSQLServer\Client\ConnectTo. You can also search the registry for the string value of one of the server names (old or new). Right-click on the "ConnectTo" label and choose "Export". Save with an appropriate name and location. The resulting file should look something like this:

    Figure 5 – SQL01_Alias.reg

    Repeat the process with the location: HKLM\Software\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo. Note that if you have multiple alias entries, ALL of the entries will be exported. In that case, you can edit the file and remove the extra aliases. You can edit the files together into a single file.
    Just leave a blank line between new keys, like this:

    Figure 6 – SQL01_Alias_All.reg

    Of course, if you have an automatic way to deploy, it makes sense to have an automatic way to un-deploy. To delete a registry key, simply edit the .reg file and prefix the key name with a "-" sign, like so:

    Figure 7 – SQL01_Alias_UNDO.reg

    Now we have the ability to move any database to any server without having to install or change any applications on any client server. The whole process should be transparent to the applications, which makes planning and coordinating database moves a far simpler task.
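    The same aliases can also be written directly from a script, which is handy when distributing a .reg file is awkward. A minimal Python sketch using the standard winreg module (the alias and target names reuse the example above; "DBMSSOCN" is the TCP/IP network library identifier these alias values normally carry, so verify the exact value format against a key exported from Configuration Manager):

        # Sketch: create the SQL client alias in both the native and WoW registry views.
        # Run elevated on the client machine; mirrors the SQL01\SQLUtility -> SQL02 example.
        import winreg

        ALIAS = r"SQL01\SQLUtility"
        TARGET = "DBMSSOCN,SQL02"      # TCP/IP library, then the server to redirect to
        SUBKEY = r"SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo"

        for view in (winreg.KEY_WOW64_64KEY, winreg.KEY_WOW64_32KEY):
            key = winreg.CreateKeyEx(
                winreg.HKEY_LOCAL_MACHINE, SUBKEY, 0,
                winreg.KEY_SET_VALUE | view)
            winreg.SetValueEx(key, ALIAS, 0, winreg.REG_SZ, TARGET)
            winreg.CloseKey(key)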

    Read the article

  • How to Find Your IP Address in Ubuntu Linux

    - by Trevor Bekolay
    In Windows, we use the command-line program ipconfig to find out our IP address. How do you find it in Ubuntu? We will show you two locations easily accessible through the GUI and, of course, a terminal command that will get your IP address in no time.

    The first location, and the easiest in most cases, is found by right-clicking the network icon in the notification area and clicking Connection Information. This brings up a window with a bunch of information, including your IP address.

    The second location, which shows you more detail than the first method, is at System > Administration > Network Tools. Select the right network device, and you've got a ton of information at your fingertips.

    Finally, if you can't tear yourself away from a terminal window, the command to type in is:

        ifconfig

    Yes, it's only one character different from ipconfig. Who would have guessed? As it turns out, you're always a few clicks or keystrokes away from finding your IP address in Ubuntu. Isn't choice great?
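    If you want the address from a script rather than from the terminal, a common trick is to open a UDP socket toward any outside address and ask the OS which local address it chose; no traffic is actually sent. A minimal Python sketch (the 8.8.8.8 target is just a well-known public IP used as a routing hint):

        # Sketch: find the local IP address the default route would use.
        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.connect(("8.8.8.8", 80))     # no packets are sent for a UDP connect
            print(s.getsockname()[0])      # e.g. 192.168.1.23
        finally:
            s.close()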

    Read the article

  • How to Restore the Real Internet Explorer Desktop Icon in Windows 7

    - by The Geek
    Remember how previous versions of Windows had an Internet Explorer icon on the desktop, and you could right-click it to quickly access the Internet Options screen? It's completely gone in Windows 7, but a geeky hack can bring it back.

    Microsoft removed this feature to comply with all those murky legal battles they've had, and their alternate suggestion is to create a standard shortcut to iexplore.exe on the Desktop, but it's not the same thing. We've got a registry hack to bring it back. This guest article was written by Ramesh from the WinHelpOnline blog, where he's got loads of really geeky registry hacks.

    Bring Back the Internet Explorer Namespace Icon in Windows 7 the Easy Way

    If you just want the IE icon back, all you need to do is download the RealInternetExplorerIcon.zip file, extract the contents, and then double-click on the w7_ie_icon_restore.reg file. That's all you have to do. There's also an undo registry file there if you want to get rid of it.

    Download the Real Internet Explorer Icon Registry Hack

    Manual Registry Hack

    If you prefer doing things the manual way, or just really want to understand how this hack works, you can follow the manual steps below to learn how it was done, but we'll have to warn you that it's a lot of steps.

    Launch Regedit.exe using the Start Menu search box, and then navigate to the following location:
        HKEY_CLASSES_ROOT\CLSID\{871C5380-42A0-1069-A2EA-08002B30309D}
    Right-click on the key in the left-hand pane, choose Export, and save it to a .REG file (say, ie-guid.reg). Open up the REG file using Notepad. From the Edit menu, click Replace, and replace every occurrence of the following GUID string
        {871C5380-42A0-1069-A2EA-08002B30309D}
    with a custom GUID string, such as:
        {871C5380-42A0-1069-A2EA-08002B30301D}
    Save the REG file and close Notepad, and then double-click on the file to merge the contents into the registry. Either re-open the registry editor, or use the F5 key to reload everything with the new changes (this step is important). Now you can navigate down to the following registry key:
        HKEY_CLASSES_ROOT\CLSID\{871C5380-42A0-1069-A2EA-08002B30301D}\Shellex\ContextMenuHandlers\ieframe
    Double-click on the (default) value in the right-hand pane and set its data to:
        {871C5380-42A0-1069-A2EA-08002B30309D}
    With this done, press F5 on the desktop and you'll see the Internet Explorer icon that looks like this:

    The icon appears incomplete without the Properties command in the right-click menu, so keep reading.

    Final Registry Hack Adjustments

    Click on the following key, which should still be viewable in your Registry editor window from the last step:
        HKEY_CLASSES_ROOT\CLSID\{871C5380-42A0-1069-A2EA-08002B30301D}
    Double-click LocalizedString in the right-hand pane and type the following data to rename the icon:
        Internet Explorer
    Select the following key:
        HKEY_CLASSES_ROOT\CLSID\{871C5380-42A0-1069-A2EA-08002B30301D}\shell
    Add a subkey and name it Properties, then select the Properties key, double-click the (default) value and type the following:
        P&roperties
    Create a String value named Position, and type the following data:
        bottom
    At this point the window should look something like this:

    Under Properties, create a subkey and name it Command, and then set its (default) value as follows:
        control.exe inetcpl.cpl
    Navigate down to the following key, and then delete the value named LegacyDisable:
        HKEY_CLASSES_ROOT\CLSID\{871C5380-42A0-1069-A2EA-08002B30301D}\shell\OpenHomePage
    Now head to this key:
        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Desktop\NameSpace
    Create a subkey named {871C5380-42A0-1069-A2EA-08002B30301D} (which is the custom GUID that we used earlier in this article). Press F5 to refresh the Desktop, and the Internet Explorer icon finally appears.

    That's it! It only took 24 steps, but you made it through to the end. Of course, you could just download the registry hack and get the icon back with a double-click.

    Read the article

  • How to install Revolution R Enterprise?

    - by Abe
    Revolution R Enterprise is available as a Red Hat .rpm file. Normally I would use alien to install an rpm file, but the instructions for installing this package have an install.py file that I am supposed to execute. When I run ./install.py, I get the following message:

        rpm: please use alien to install rpm packages on Debian, if you are really sure use --force-debian switch. See README.Debian for more details.

    There is no README.Debian file in the directory, and although I am not proficient in Python, I can tell that there are at least four different directories with *.rpm files in them. Has anyone had success with this? If possible, I'd prefer to install the Enterprise version instead of the Community version in the Ubuntu repository, so that I can test it out.

    Read the article

  • Re-running SSRS subscription jobs that have failed

    - by Rob Farley
    Sometimes an SSRS subscription fails for some reason. It can be annoying, particularly as the appropriate response can be hard to see immediately. There may be a long list of jobs that failed one morning if a mail server is down, and trying to work out a way of running each one again can be painful. It's almost an argument for using shared schedules a lot, but the problem with this is that there are bound to be other things on that shared schedule that you wouldn't want to be re-run.

    Luckily, there's a table in the ReportServer database called dbo.Subscriptions, which is where the LastStatus of each subscription is stored. Having found the subscriptions that you're interested in, finding the SQL Agent jobs that correspond to them can be frustrating. Luckily, the job step command contains the subscriptionid, so it's possible to look them up based on that. And of course, once the jobs have been found, they can be executed easily enough. In this example, I produce a list of the commands to run the jobs. I can copy the results out and execute them.

        select 'exec sp_start_job @job_name = ''' + cast(j.name as varchar(40)) + ''''
        from msdb.dbo.sysjobs j
        join msdb.dbo.sysjobsteps js on js.job_id = j.job_id
        join [ReportServer].[dbo].[Subscriptions] s
          on js.command like '%' + cast(s.subscriptionid as varchar(40)) + '%'
        where s.LastStatus like 'Failure sending mail%';

    Another option could be to return the job step commands directly (js.command in this query), but my preference is to run the job that contains the step.
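    If copying the generated commands out of a grid is still too manual, the same two steps (find the jobs behind the failed subscriptions, then start each job) can be scripted. A minimal Python sketch with pyodbc, reusing the query above; the server name, driver, and trusted connection are assumptions:

        # Sketch: start the SQL Agent job behind each failed SSRS subscription.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=MyReportServer;DATABASE=msdb;Trusted_Connection=yes;",
            autocommit=True)
        cur = conn.cursor()

        cur.execute("""
            select j.name
            from msdb.dbo.sysjobs j
            join msdb.dbo.sysjobsteps js on js.job_id = j.job_id
            join [ReportServer].[dbo].[Subscriptions] s
              on js.command like '%' + cast(s.subscriptionid as varchar(40)) + '%'
            where s.LastStatus like 'Failure sending mail%'
        """)
        failed_jobs = [row.name for row in cur.fetchall()]

        for name in failed_jobs:
            print("starting", name)
            cur.execute("exec msdb.dbo.sp_start_job @job_name = ?", name)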

    Read the article

  • F# WPF Form – the basics

    - by MarkPearl
    I was listening to Dot Net Rocks show #560 about F#, and during the podcast Richard Campbell brought up a good point with regards to F# and a GUI. In essence, what I understood his point to be was that until one could write an end-to-end application in F#, it would be a hard sell to get developers to take it on. In part I agree with him; while I am beginning to really enjoy learning F#, I can't help but feel that I would be a lot further into the language if I could do my Windows Forms like I do in C# or VB.NET, for the simple reason that in "playing" applications I spend the majority of the time in the UI layer. So I have been keeping my eye out for some examples of creating a WPF form in an F# project and came across Tim's F# Twitter Stream Sample, which had exactly this… of course he actually had a bit more than a basic form, but it was enough for me to scrap the insides and glean what I needed. So today I am going to make just the very basic WPF form with all the goodness of a XAML window.

    Getting Started

    The first thing we need to do is create a new solution with a blank F# application project; I have called mine FSharpWPF. Once you have the project created, you will need to change the project type from a Console Application to a Windows Application. You do this by right-clicking on the project file and going to its properties.

    Once that is done, you will need to add the appropriate references. You do this by right-clicking on References in the Solution Explorer and clicking "Add Reference". You should add the appropriate .NET references below for WPF and XAML to work. Once these references are added, you then need to add your XAML file to the project. You can do this by adding a new item to the project of type xml and simply changing the file extension from xml to xaml. Once the xaml file has been added to the project, you will need to add valid window XAML. An example of a very basic xaml file is shown below:

        <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="F# WPF WPF Form" Height="350" Width="525">
            <Grid>
            </Grid>
        </Window>

    Once your xaml file is done, you need to set the build action of the xaml file from "None" to "Resource", as depicted in the picture below. If you do not set this, you will get an IOException error when running the completed project, with a message along the lines of "Cannot locate resource 'window.xaml'".

    You then need to tie everything up by putting the correct F# code in Program.fs to load the xaml window. In Program.fs, put the following code:

        module Program

        open System
        open System.Collections.ObjectModel
        open System.IO
        open System.Windows
        open System.Windows.Controls
        open System.Windows.Markup

        [<STAThread>]
        [<EntryPoint>]
        let main(_) =
            let w = Application.LoadComponent(
                        new System.Uri("/FSharpWPF;component/Window.xaml", System.UriKind.Relative)) :?> Window
            (new Application()).Run(w)

    Once all this is done, you should be able to build and run your project. What you have done is created a WPF-based window inside an F# project. It should look something like below…

    Nothing too exciting, but sufficient to illustrate the very basic WPF form in F#. Hopefully in future posts I will build on this to expose button events etc.

    Read the article

  • make-like build tools for data?

    - by miku
    Make is a standard tool for building software. But make decides whether a target needs to be regenerated by comparing file modification times. Are there any proven, preferably small tools that handle builds not for software but for data? Something that regenerates targets not only on mod times but on certain other properties (e.g. completeness). (Or alternatively, some paper that describes such a tool.)

    As an illustration, I'd like to automate the following process:
    - get data (e.g. a tarball) from some regularly updated source
    - copy it somewhere if it's not there (based e.g. on some filename scheme)
    - convert the files to a different format (but only if there aren't successfully converted ones there, e.g. from a previous attempt - custom comparison routine)
    - for each file, find a certain data element and fetch some additional file from, say, a URL, but only if that hasn't been downloaded yet (decide on existence of file and file "freshness")
    - finally compute something (e.g. a word count for something identifiable and store it in the database, but only if the DB does not have an entry for that exact ID yet)

    Observations:
    - there are different stages
    - each stage is usually simple to compute or implement in isolation
    - each stage may be simple, but the data volume may be large
    - each stage may produce a few errors
    - each stage may have different signals on when (re)processing is needed

    Requirements:
    - builds should be interruptible and idempotent (== robust)
    - when interrupted, already processed objects should be reused to speed up the next run
    - data paths should be easy to adjust (simple syntax, nothing new to learn, an internal DSL would be ok)
    - some form of dependency graph that describes the process would be nice for later visualizations
    - it should leverage existing programs, if possible

    I've done some research on make alternatives like rake and have worked a lot with ant and maven in the past. All these tools naturally focus on code and software builds, not on data builds. A system we have in place now for a task similar to the above is pretty much just shell scripts, which are compact (and are an ok glue for a variety of other programs written in other languages), so I wonder if worse is better?
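    For what it's worth, the core decision make takes ("is the target older than its inputs?") is easy to extend with extra predicates such as a completeness check, which is the part standard make lacks. A minimal Python sketch of that idea (the file names and the completeness test are purely illustrative):

        # Sketch: rebuild a data target when it is missing, stale, or incomplete.
        import os

        def is_complete(path):
            # Stand-in completeness check: non-empty and ends with a newline.
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                return False
            return len(data) > 0 and data.endswith(b"\n")

        def needs_rebuild(target, sources):
            if not os.path.exists(target):
                return True                                    # missing
            t_mtime = os.path.getmtime(target)
            if any(os.path.getmtime(s) > t_mtime for s in sources):
                return True                                    # stale (make's rule)
            return not is_complete(target)                     # extra predicate

        if needs_rebuild("out/wordcounts.tsv", ["in/dump.tar.gz"]):
            print("rebuilding...")    # call the conversion step here
        else:
            print("up to date")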

    Read the article

  • Setting a custom timeout to nmblookup

    - by C2H5OH
    As part of a batch script, I have the following command:

        hostname=$(nmblookup -A $ip_address | awk '$2 == "<20>" {print $1}')

    It works fine from a functionality perspective, even for unresolved hosts. The problem is that when the IP address is not reachable, or the remote machine does not respond to the SMB request, the command takes about ten seconds to complete. Therefore, the question is simple: is there a way to lower the elapsed time in such cases? Or, in other words, is there a way to set a custom timeout for the nmblookup command?

    NOTE: I'm interested in solutions that do not make use of SIGALRM or similar mechanisms, if they exist. The nmblookup version is 3.6.3 from Ubuntu 12.04 LTS.
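    If wrapping the lookup from outside is acceptable (this is an external cutoff rather than a switch inside nmblookup itself, so it may or may not satisfy the "no SIGALRM or similar" constraint), the call can be given a hard deadline. A minimal Python sketch; the IP address is a placeholder:

        # Sketch: run nmblookup with a hard 3-second cutoff and parse the <20> name.
        import subprocess

        ip_address = "192.168.1.50"    # placeholder
        try:
            out = subprocess.run(
                ["nmblookup", "-A", ip_address],
                capture_output=True, text=True, timeout=3).stdout
        except subprocess.TimeoutExpired:
            out = ""                    # treat a slow/unreachable host as unresolved

        hostname = next(
            (line.split()[0] for line in out.splitlines()
             if len(line.split()) > 1 and line.split()[1] == "<20>"),
            "")
        print(hostname)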

    Read the article

  • Automated “ubuntu-12.04.1-server-amd64” OS installation on physical machine

    - by user285336
    We are using a physical server and are in the process of an automated "ubuntu-12.04.1-server-amd64" OS installation on it. There are two HDDs for the OS installation and there is a RAID1 relationship between them; this setup has been done through the BIOS. The kickstart configuration file looks like this:

        #Generated by Kickstart Configurator
        #platform=AMD64 or Intel EM64T

        #System language
        lang en_US
        #Language modules to install
        langsupport en_US
        #System keyboard
        keyboard us
        #System mouse
        mouse
        #System timezone
        timezone Asia/Dili
        #Root password
        rootpw --iscrypted $1$Yl1QJyta$KzIT.kq3i9E5XaiQKcUJn/
        #Initial user
        user ankit --fullname "Ankit" --iscrypted --password $1$c6Yflpea$pi1QQ59/jgywmGwBv25z3/
        #Reboot after installation
        reboot
        #Use text mode install
        text
        #Install OS instead of upgrade
        install
        #Use Web installation
        url --url my_repo_location
        #System bootloader configuration
        bootloader --location=mbr
        #Clear the Master Boot Record
        zerombr yes
        #Partition clearing information
        clearpart --all --initlabel
        #Disk partitioning information
        part /boot --fstype ext4 --size 100 --ondisk sda
        part / --fstype ext4 --size 10000 --ondisk sda
        part /var --fstype ext4 --size 10000 --ondisk sda
        part swap --size 1024 --ondisk sdb
        #System authorization infomation
        auth --useshadow --enablemd5
        #Network information
        network --bootproto=dhcp --device=eth0
        #Firewall configuration
        firewall --enabled --trust=eth0 --http --ftp --ssh --telnet --smtp
        #X Window System configuration information
        xconfig --depth=8 --resolution=640x480 --defaultdesktop=GNOME

    But I am getting the error below:

        No root file system is defined

    Please advise. Do we need to make any modification to the kickstart configuration file? Any help in this regard will be very helpful for us. The automated Ubuntu OS installation is successful in a virtual machine (VM) with the above ks.cfg (kickstart configuration file), but fails on the physical machine. Please advise and, if possible, provide a new ks.cfg file to resolve the above problem.

    Thanks & Regards,
    Rajesh Prasad

    Read the article

  • ubuntu-12.04-wubi-i386.tar.xz for the wubi installer

    - by Alejandro
    I ran the Wubi installer (from an extracted Ubuntu ISO) and it downloads ubuntu-12.04-wubi-i386.tar.xz, but the download is slow and non-resumable, so I cancelled it, found a mirror of the file online, and downloaded it using Internet Download Manager. Where should I place ubuntu-12.04-wubi-i386.tar.xz so the Wubi installer won't have to download that file anymore? Thank you.

    Update: I extracted the archive and there are two files; I'm afraid I do not know where to place them.

    Read the article

  • Cannot open software sources after removing a PPA

    - by rkwbcca
    After deleting a PPA entry in the sources.list file, I am not able to open the Software Sources application. Opening the Software Centre is fine. I tried running gksudo software-properties-gtk and got the following message:

        SoftwareProperties.__init__(self, options=options, datadir=datadir)
          File "/usr/lib/python2.7/dist-packages/softwareproperties/SoftwareProperties.py", line 96, in __init__
            self.reload_sourceslist()
          File "/usr/lib/python2.7/dist-packages/softwareproperties/SoftwareProperties.py", line 580, in reload_sourceslist
            self.distro.get_sources(self.sourceslist)
          File "/usr/lib/python2.7/dist-packages/aptsources/distro.py", line 91, in get_sources
            raise NoDistroTemplateException("Error: could not find a "
        aptsources.distro.NoDistroTemplateException: Error: could not find a distribution template

    I would appreciate it if you could let me know how to solve this problem.

    Read the article

  • Problem in installing OpenOffice 3.1 on Solaris 10

    - by Sunil Kumar Sahoo
    I want OpenOffice on Solaris, so I downloaded OpenOffice from the link below:
    http://download.openoffice.org/other.html#tested-full
    The download is in .tar.gz format, so I unzipped the file using gunzip and then untarred it using the tar xvf command. I now have a directory containing a packages subfolder. When I cd into that directory I find many subdirectories, but I could not find a single .pkg, .jar, or .sh file with which to install OpenOffice on Solaris 10. How can I install OpenOffice on Solaris 10 given the scenario above?

    Read the article

  • Add Properties Back to the Context Menu in Firefox

    - by Asian Angel
    Have you noticed that the Properties command has been removed from the context menu in Firefox 3.6? If you have been missing it, here is how to get it back.

    Before

    With the newest version of Firefox you may have noticed a very useful command missing from the context menu. Here you can see that when we right-clicked on the article link we were unable to access the properties for it… Same article, and the same problem when trying to access the properties for one of the images.

    After

    Once you have installed the extension you can once again access the properties for those links… And those images… Looking very good…

    Conclusion

    If you have been frustrated with the removal of the Properties command from the context menu in Firefox 3.6, you can now add it back in just a few moments.

    Links

    Download the Element Properties extension (Mozilla Add-ons)

    Read the article

  • Filling in PDF Forms with ASP.NET and iTextSharp

    The Portable Document Format (PDF) is a popular file format for documents, and for two primary reasons. First, because the PDF standard is an open standard, there are many vendors that provide PDF readers across virtually all operating systems, and many proprietary programs, such as Microsoft Word, include a "Save as PDF" option. Consequently, PDFs serve as a sort of common currency of exchange: a person writing a document using Microsoft Word for Windows can save the document as a PDF, which can then be read by others whether or not they are using Windows and whether or not they have Microsoft Word installed. Second, PDF files are self-contained. Each PDF file includes its complete text, fonts, images, input fields, and other content. This means that even complicated documents with many images, an intricate layout, and user interface elements like textboxes and checkboxes can be encapsulated in a single PDF file.

    Due to their ubiquity and layout capabilities, it's not uncommon for websites to use PDF technology. For example, when purchasing goods at an online store you may be offered the ability to download an invoice as a PDF file. PDFs also support form fields, which are user interface elements like textboxes, checkboxes, comboboxes, and the like. These form fields can be filled in by a user viewing the PDF or, with a bit of code, they can be filled in programmatically.

    This article is the first in a multi-part series that examines how to programmatically work with PDF files from an ASP.NET application using iTextSharp, a .NET open source library for PDF generation. This installment shows how to use iTextSharp to open an existing PDF document with form fields, fill those form fields with user-supplied values, and then save the combined output to a new PDF file. Read on to learn more!
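    The article itself uses iTextSharp from C#/ASP.NET. Purely as a point of comparison, the same "open a form, set field values, save a new file" flow looks like this in Python with the pypdf library; the field names and file names are made up, and the pypdf API has shifted between versions, so treat this as a sketch to verify against that library's documentation:

        # Sketch: fill PDF form fields programmatically (pypdf, not iTextSharp).
        from pypdf import PdfReader, PdfWriter

        reader = PdfReader("invoice_template.pdf")
        writer = PdfWriter()
        writer.append(reader)                      # copy all pages, keeping the form

        print(reader.get_fields().keys())          # inspect the available field names

        writer.update_page_form_field_values(
            writer.pages[0],
            {"CustomerName": "Jane Doe", "OrderTotal": "42.00"})

        with open("invoice_filled.pdf", "wb") as f:
            writer.write(f)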

    Read the article

  • New event log nowhere to be found after creating in PowerShell

    - by Mega Matt
    Through PowerShell, I am attempting to create a new event log and write a test entry to it, but it is not showing up in the Event Viewer. This is the command I'm using to create the new event log:

        new-eventlog -logname TestLog -source TestLog

    And to write a new event to it:

        write-eventlog TestLog -source TestLog -eventid 12345 -message "Test message"

    After running the first command, there is no "TestLog" log in the Event Viewer anywhere, and I would expect it to show up in the Applications and Services Logs section. After running the second command, same result. However, I am seeing a registry key for the log at HKLM\SYSTEM\services\eventlog\TestLog. Just not seeing anything in the Event Viewer.

    So, two questions: When should I see the event log, after it gets created or after I write the first event to it? And, more importantly, why am I not seeing it at all? I'm using Windows Server 2008 R2, and am logged in and running PowerShell as an administrator. Thanks.

    Read the article

  • What's consuming HDD space

    - by Umair Mustafa
    I have a single partition of 92GB in which I installed Ubuntu 12.04, and for some unknown reason a message pops up saying that I only have 1GB of HDD space left. I ran the command sudo du -hscx * on / and on /home.

    /home gave me this result:

        4.0K    C:\nppdf32Log\debuglog.txt
        0       convertedvideo.avi
        176M    Desktop
        16K     Documents
        169M    Downloads
        4.0K    examples.desktop
        17M     file.txt
        4.0K    Music
        984K    Pictures
        4.0K    Public
        320K    Red Hat 6.iso
        2.5M    syslog-ng_3.3.6.tar.gz
        4.0K    Templates
        8.0K    terminal.png
        1.2M    Thunderbird Attachments
        698M    ubuntu10.04LTS.iso
        16K     Ubuntu One
        4.0K    Untitled Folder
        4.0K    Videos
        21G     VirtualBox VMs
        22G     total

    And / gave me this result:

        81G     home
        0       initrd.img
        0       initrd.img.old
        833M    lib
        16K     lost+found
        68K     media
        4.0K    mnt
        260M    opt
        du: cannot access `proc/8339/task/8339/fd/4': No such file or directory
        du: cannot access `proc/8339/task/8339/fdinfo/4': No such file or directory
        du: cannot access `proc/8339/fd/4': No such file or directory
        du: cannot access `proc/8339/fdinfo/4': No such file or directory
        0       proc
        640K    root
        908K    run
        8.6M    sbin
        4.0K    selinux
        4.0K    srv
        0       sys
        148K    tmp
        3.3G    usr
        436M    var
        0       vmlinuz
        0       vmlinuz.old
        86G     total

    If you look at the result returned for /, it shows that /home is consuming 81GB, but du on /home itself returns only 22GB. I can't figure out what's consuming the HDD. I have not installed anything except virtual machines.

    Perpetrator found using Disk Usage Analyzer.
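    One thing worth noting when comparing those two numbers: du -hscx * only sums the entries the shell glob * matches, and by default that glob skips hidden dot-directories such as ~/.cache or ~/.local, so a large chunk of /home can go uncounted. A minimal Python sketch that walks everything, hidden or not, and prints the biggest top-level entries (the path is an assumption):

        # Sketch: du-like scan of a home directory that includes hidden entries.
        import os

        ROOT = "/home/umair"        # assumed; adjust to the real home directory

        def entry_size(path):
            if not os.path.isdir(path) or os.path.islink(path):
                return os.lstat(path).st_size
            total = 0
            for dirpath, dirnames, filenames in os.walk(path, onerror=lambda e: None):
                for name in filenames:
                    try:
                        total += os.lstat(os.path.join(dirpath, name)).st_size
                    except OSError:
                        pass            # unreadable file; skip and continue, as du does
            return total

        entries = [(entry_size(os.path.join(ROOT, e)), e) for e in os.listdir(ROOT)]
        for size, name in sorted(entries, reverse=True)[:15]:
            print(f"{size / 2**30:8.2f} GiB  {name}")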

    Read the article

  • Conversion of tiff image in Python script - OCR using tesseract

    - by PYTHON TEAM
    I want to convert a TIFF image file to a text document. My code works perfectly, as I expected, to convert TIFF images with a usual font, but it is not working for a French Script font. My TIFF image file contains text, and the font of that text is in French Script format. Here is my code:

        import Image
        import subprocess
        import util
        import errors

        tesseract_exe_name = 'tesseract'    # Name of executable to be called at command line
        scratch_image_name = "temp.bmp"     # This file must be .bmp or other Tesseract-compatible format
        scratch_text_name_root = "temp"     # Leave out the .txt extension
        cleanup_scratch_flag = True         # Temporary files cleaned up after OCR operation

        def call_tesseract(input_filename, output_filename):
            """Calls external tesseract.exe on input file (restrictions on types),
            outputting output_filename+'txt'"""
            args = [tesseract_exe_name, input_filename, output_filename]
            proc = subprocess.Popen(args)
            retcode = proc.wait()
            if retcode != 0:
                errors.check_for_errors()

        def image_to_string(im, cleanup=cleanup_scratch_flag):
            """Converts im to file, applies tesseract, and fetches resulting text.
            If cleanup=True, delete scratch files after operation."""
            try:
                util.image_to_scratch(im, scratch_image_name)
                call_tesseract(scratch_image_name, scratch_text_name_root)
                text = util.retrieve_text(scratch_text_name_root)
            finally:
                if cleanup:
                    util.perform_cleanup(scratch_image_name, scratch_text_name_root)
            return text

        def image_file_to_string(filename, cleanup=cleanup_scratch_flag, graceful_errors=True):
            """Applies tesseract directly to filename; if that fails and graceful_errors=True,
            opens the image and falls back to image_to_string.
            If cleanup=True, delete scratch files after operation."""
            try:
                try:
                    call_tesseract(filename, scratch_text_name_root)
                    text = util.retrieve_text(scratch_text_name_root)
                except errors.Tesser_General_Exception:
                    if graceful_errors:
                        im = Image.open(filename)
                        text = image_to_string(im, cleanup)
                    else:
                        raise
            finally:
                if cleanup:
                    util.perform_cleanup(scratch_image_name, scratch_text_name_root)
            return text

        if __name__ == '__main__':
            im = Image.open("/home/oomsys/phototest.tif")
            text = image_to_string(im)
            print text
            try:
                text = image_file_to_string('fnord.tif', graceful_errors=False)
            except errors.Tesser_General_Exception, value:
                print "fnord.tif is incompatible filetype. Try graceful_errors=True"
                print value
            text = image_file_to_string('fnord.tif', graceful_errors=True)
            print "fnord.tif contents:", text
            text = image_file_to_string('fonts_test.png', graceful_errors=True)
            print text

    Read the article

  • Remove Programs from the Open With Menu in Explorer

    - by Matthew Guay
    Would you like to clean up the Open With menu in Windows Explorer? Here's how you can remove program entries you don't want in this menu on any version of Windows.

    Have you ever accidentally opened an MP3 with Notepad, or a ZIP file with Word? If so, you're also likely irritated that these programs now show up in the Open With menu in Windows Explorer every time you select one of those files. Whenever you open a file type with a particular program, Windows adds an entry for it to the Open With menu. Usually this is helpful, but it can also clutter up the menu with wrong entries.

    On our computer, we have tried to open a PDF file with Word and Notepad, neither of which can actually view the PDF itself. Let's remove these entries. To do this, we need to remove the registry entries for these programs. Enter regedit in your Start menu search or in the Run command to open the Registry Editor.

    Back up your registry first, just in case, so you can roll back any changes you make if you accidentally delete the wrong value. Now, browse to the following key:

        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\

    Here you'll see a list of all the file extensions that are registered on your computer. Browse to the file extension you wish to edit, click the white triangle beside it to see the subfolders, and select OpenWithList. In our test, we want to change the programs associated with PDF files, so we select the OpenWithList folder under .pdf. Notice the names of the programs under the Data column on the right. Right-click the value for the program you don't want to see in the Open With menu and select Delete. Click Yes at the prompt to confirm that you want to delete this value.

    Repeat these steps with all the programs you want to remove from this file type's Open With menu. You can go ahead and remove entries from other file types as well if you wish. Once you've removed the entries you didn't want to see, check out the Open With menu in Explorer again. Now it will be much more streamlined and will only show the programs you want to see.

    Conclusion

    This simple trick can help you keep your Open With menu tidy, and only show the programs you want in the list. It can be irritating to accidentally open files in programs that can't even read them. This trick works in all versions of Windows, including 2000, XP, Vista, and Windows 7.
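    The same cleanup can be scripted if you have many machines or many extensions to tidy. A minimal Python sketch with the standard winreg module (the .pdf extension and the WINWORD.EXE / NOTEPAD.EXE names mirror the example above; note the sketch does not adjust the MRUList value that Windows uses to order the entries, so treat it as a starting point):

        # Sketch: delete unwanted programs from an extension's OpenWithList (per-user key).
        import winreg

        EXT = ".pdf"
        UNWANTED = {"WINWORD.EXE", "NOTEPAD.EXE"}
        path = rf"Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\{EXT}\OpenWithList"

        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, path, 0,
                            winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
            to_delete = []
            i = 0
            while True:
                try:
                    name, data, _ = winreg.EnumValue(key, i)
                except OSError:
                    break                       # no more values
                if isinstance(data, str) and data.upper() in UNWANTED:
                    to_delete.append(name)      # values are named a, b, c, ...
                i += 1
            for name in to_delete:
                winreg.DeleteValue(key, name)
                print("removed", name)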

    Read the article

  • How to Import Data Taxonomy Into Managed Metadata Service in SharePoint 2010

    - by Wayne
    First, open the Term Store Management Tool (Site Actions > Site Settings > Term Store Management) and download the sample import file. (Remember, service applications are configured on a per-web-application basis, so use any site collection inside a web application configured with your MMS.) Second, insert your data. In the photo below, I demonstrate creating a term called USA. Under that, I create the term Alabama, and under that, four cities. Then, under USA, a term called Alaska. The point is that we have a hierarchy; using the import file, we can go seven layers deep. The last step is to save the file, head back into the Term Store Management Tool, select or create a group, and import the file.

    Read the article
