Search Results

Search found 5195 results on 208 pages for 'shift reduce conflict'.


  • High Usage of RAM by wxPython's GUI and need some advice to reduce it

    - by user67024
    I've recently developed a GUI in wxPython for the Windows platform. It contains five tabs: four of them are just RichTextCtrl boxes, and the other one has controls for uploading files, buttons, TextCtrls, a slider, etc. As I was new to GUI development in Python, I used wxFormBuilder to generate some of the code, using a good number of sizers. The problem is that the GUI starts off with an initial memory footprint of around 40 MB, which is too much for such a simple application (or so I think). Also, the event-handling functions work with huge lists, because the program is for debugging large data logs and identifying the problems in them, so I can't afford to spend that much memory on the GUI itself. How can I reduce that starting working memory size? Is it a general issue in wxPython? I'm currently trying to use profilers, but I'm not sure that's going to help.
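
    A hedged starting point for the profiling mentioned above, assuming Python 3.4+ (earlier versions would need a third-party tool instead): the standard library's tracemalloc can rank allocation sites by size, which helps separate the GUI's own overhead from the log-parsing lists. A minimal sketch, with the wx setup only indicated by a comment:

        import tracemalloc

        tracemalloc.start()

        # ... build the wx.App, the notebook with its tabs, sizers, etc. ...

        snapshot = tracemalloc.take_snapshot()
        for stat in snapshot.statistics('lineno')[:10]:
            print(stat)  # the ten biggest allocation sites, as file:line plus size

    If the log-handling lists turn out to dominate, processing the logs lazily with generators rather than materializing full lists is usually the first saving; the 40 MB baseline itself is likely mostly interpreter and wx library overhead and is hard to reduce much.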

    Read the article

  • Conflict between Change Control and ASL Mapping

    - by Jie Chen
    Yesterday I got a strange report that on Agile 9.3.1.2, adding a Supplier to an Item's Suppliers tab always removes all the data from the Item.PageTwo.MultiList01 field, which is assigned to a User Group list. The detailed problem description is as follows.

    In JavaClient, the MultiList01 attribute on the Parts class's PageTwo tab is enabled and assigned to the User Group list. In WebClient, a user created a new Part and assigned MultiList01 two User Groups: "Global User Group Test1" and "Personal Group_Test1". He then went to the Suppliers tab to add three Suppliers. Switching back to the Part's TitleBlock, he saw that MultiList01 had lost the User Group data.

    To confirm whether MultiList01 really loses the data or saves other, wrong data, I checked the database and found that MultiList01 stores the strange value ",7976911,7976907,7976959,", which is exactly the IDs of those three Suppliers. So I suspected that the Supplier attribute on the Suppliers tab must be mapped to MultiList01. However, when I checked Supplier in JavaClient, "ASL mapped to" was blank. More interestingly, the database clearly shows that the Supplier attribute (Base ID = 2000004219) is mapped to 2090, which is the Base ID of PageTwo.MultiList01.

    At this point we can conclude that Supplier data really is mapped to MultiList01, even though we assigned MultiList01 to the User Group list and Supplier does not show an "ASL mapped to" value. Some other function must be overriding the visibility of "ASL mapped to" in JavaClient with higher priority: the "Change Controlled" function. As soon as we disable Change Control for MultiList01, we see "ASL mapped to" with the value "MultiList01".

    If an attribute is Change Controlled, Supplier data should not be mappable to it, because Supplier data can be modified dynamically by users rather than through Changes. That this is possible in Agile 9.3.1.2 may be a code defect. We can imagine the scenario the customer went through: he set up Parts.PageTwo.MultiList01 assigned to the Supplier list, then set "ASL mapped to" on the Parts.Suppliers.Supplier attribute to "MultiList01". Later the company's business changed, so he made MultiList01 Change Controlled and re-assigned it to the User Group list, but forgot to remove "ASL mapped to" before modifying MultiList01.

    The solution depends on the real business need. If Supplier still has to be mapped to a Parts.PageTwo attribute, change "ASL mapped to" to another attribute that is already assigned to the Suppliers list. If the "ASL mapped to" function is not needed, delete the mapping at the database level; it cannot be done from the JavaClient UI:

        delete propertytable where id in
        (select p.id from propertytable p, nodetable n
         where p.parentid = n.id and n.inherit = 2000004219 and propertyid = 794)

    Read the article

  • conflict with php zend libs and c++ libs(ctime) to build extension for php [migrated]

    - by user69800
    I'm going to build an extension for PHP 5.3.x. Everything is OK when I build without the VC++ library, but with it I get:

        error C2039: 'clock_t' : is not a member of '`global namespace''
        error C2873: 'clock_t' : symbol cannot be used in a using-declaration
        error C2039: 'asctime' : is not a member of '`global namespace''
        error C2873: 'asctime' : symbol cannot be used in a using-declaration

    and so on. I include just <ctime> and get these errors. I know this problem comes from the include directories required by the PHP Zend engine, which are set in the project properties, but I do not know how to solve it. Thanks.

    Read the article

  • Reduce HR Costs & Improve Productivity with Workforce Communications

    - by Robert Story
    Upcoming Webcast
    Title: Reduce HR Costs & Improve Productivity with Workforce Communications
    Date: May 18, 2010
    Time: 12:00 pm EDT, 9:00 am PDT, 17:00 GMT
    Product Family: PeopleSoft HCM & EBS HRMS
    Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • joomla ACL :: two groups permissions conflict

    - by semyon
    I have several user groups on my website, like:

        Site Staff
        Departments
        -- History department
        -- Physics department
        -- Foreign languages department
        -- IT department
        etc.

    I also have several categories, like:

        News
        About
        ...
        Departments
        -- History department
        -- Physics department
        -- Foreign languages department
        -- IT department
        etc.

    Users in the Site Staff group can edit the entire site except for the Departments categories (I've set a Deny permission there). Each Department user group can edit only its corresponding category. I have successfully implemented all this. The question is: if a user belongs to two groups (Site Staff and Physics department, for instance), he should be able to edit the whole site except the Departments category, and he should also be able to edit the Physics department category. The latter is what I cannot implement. Can you suggest any ideas?

    Read the article

  • How To Get a Better Wireless Signal and Reduce Wireless Network Interference

    - by Chris Hoffman
    Like all sufficiently advanced technologies, Wi-Fi can feel like magic. But Wi-Fi isn’t magic – it’s radio waves. A variety of things can interfere with these radio waves, making your wireless connection weaker and more unreliable. The main keys to improving your wireless network’s signal are positioning your router properly — taking obstructions into account — and reducing interference from other wireless networks and household appliances.

    Read the article

  • Ubuntu Lock Screen Conflict with Google Chrome

    - by S.K.
    I don't know if this is a "feature" or not, but when I lock my screen in Ubuntu (GNOME 3), if Google Chrome needs to show a JavaScript alert (for example, when I set a reminder for an event in Google Calendar), Chrome will show up on top of the lock screen. If you'd like to simulate it, try running this JSFiddle: click on the green box and lock your screen before the alert shows up - http://jsfiddle.net/skoshy/ZYSYr/ Does anyone know how to fix or avoid this?

    Read the article

  • Gnome panel crashed, I think due to a conflict with the fglrx driver settings

    - by HappyLearner
    Recently I installed GNOME Shell in Ubuntu 11.10, and when I log in to GNOME instead of Unity, everything is okay except that the panel is discolored. I thought it was due to the fglrx driver, so I uninstalled it, but that didn't do any good. Also, when I log in through GNOME, it actually logs into GNOME Classic. And when I click Applications at the top left of the desktop, the desktop gets somewhat disturbed, with large icons. Is there any way to solve these issues?

    Read the article

  • Upstart Script: Detect Shift Key Down At Boot

    - by bambuntu
    I want to create a boot setup that loads different upstart/runlevel configurations depending on specific keys (or combos) being held down at boot. How do I detect a key-down event from an upstart script? I'm offering a bounty: the deal is that you must provide a very simple piece of working code to do this, and I will immediately check the code and verify that it works. I'm on 10.04, if that helps. Alternative methods that achieve the same result are acceptable; for example, if GRUB could somehow show entries that indicate a type of boot, where that boot would cp the appropriate files to /etc/init. So instead of a key-down solution it would be a boot-menu-item solution, plus a way to get GRUB to copy upstart scripts to /etc/init, if possible.
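
    For the key-down half, a minimal sketch, assuming a Linux virtual console at boot: it reads the kernel's shift_state via the TIOCLINUX ioctl on /dev/console, so it needs root and will not work from inside X. The 0x541C request number and subcode 6 come from the kernel headers and the console_ioctl documentation:

        import fcntl

        TIOCLINUX = 0x541C  # ioctl request number from <linux/tty.h>
        KG_SHIFT = 0        # bit index of Shift in the kernel's shift_state

        with open('/dev/console', 'rb') as console:
            buf = bytearray([6])  # subcode 6: read the current shift_state
            fcntl.ioctl(console, TIOCLINUX, buf)
            shift_down = bool(buf[0] & (1 << KG_SHIFT))

        raise SystemExit(0 if shift_down else 1)

    An upstart job could run this early in boot and copy the appropriate files into /etc/init depending on the exit status.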

    Read the article

  • Windows 8: Paradigm Shift

    You've probably heard a lot about the loss of the Start button in Windows 8. While it isn't completely lost - you can still get to it via a convoluted path - its disappearance is merely a sign of the rethinking that went into the operating system's creation. Windows 8's designers made certain assumptions while building the new system:

        Users will interact with the operating system predominantly through a touch interface.
        Users will do their computing on mobile devices, and may in fact use several different devices for the same purposes.
        They may even want to get work done on devices they do n...

    Read the article

  • How to capture shift-tab in vim

    - by Yogesh Arora
    I want to use shift-tab for auto-completion and for shifting code blocks visually. I have been referring to Make_Shift-Tab_work. That link talks about mapping ^[[Z to shift-tab, but I don't get ^[[Z when I press shift-tab; I just get a normal tab in that case. It then talks about using

        xmodmap -pke | grep 'Tab'

    to map the tab keys. According to that, the output should be

        keycode 23 = Tab
    or
        keycode 23 = Tab ISO_Left_Tab

    However, I get

        keycode 22 = Tab KP_Tab

    If I run

        xmodmap -e 'keycode 22 = Tab ISO_Left_Tab'

    and after that xmodmap -pke | grep 'Tab', I still get keycode 22 = Tab KP_Tab. This means running the xmodmap -e command has no effect. In the end the link mentions using xev to see what X receives when I press shift-tab, but I don't have xev on my system. Is there any other way I can capture shift-tab in vim?
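
    As a stand-in for the missing xev, one way to see exactly which bytes the terminal sends for shift-tab is to read stdin in raw mode; xterm-style terminals send ESC [ Z, i.e. b'\x1b[Z', for shift-tab. A small sketch:

        import os
        import sys
        import termios
        import tty

        fd = sys.stdin.fileno()
        saved = termios.tcgetattr(fd)
        tty.setraw(fd)  # raw mode: keystrokes reach us before any line editing
        try:
            data = os.read(fd, 16)  # press shift-tab once
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, saved)
        print(repr(data))

    If this prints b'\x1b[Z', the ^[[Z mapping from the linked page should apply; if it prints just b'\t', the terminal itself never distinguishes shift-tab from tab, and no vim-side mapping can recover the difference.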

    Read the article

  • Conflict between two Javascripts (MailChimp validation etc. scripts & jQuery hSlides.js)

    - by Brian
    I have two scripts running on the same page: one is the jQuery.hSlides.js script (http://www.jesuscarrera.info/demos/hslides/) and the other is a custom script used for MailChimp list-signup integration. The hSlides panel can be seen in effect here: http://theatricalbellydance.com. I've turned off the MailChimp script because it was conflicting with the hSlides script, causing it to fail completely (as seen here: http://theatricalbellydance.com/home2/). Can someone tell me what could be done to the hSlides script to stop the conflict with the MailChimp script?

    The MailChimp script:

        var fnames = new Array();
        var ftypes = new Array();
        fnames[0] = 'EMAIL'; ftypes[0] = 'email';
        fnames[3] = 'MMERGE3'; ftypes[3] = 'text';
        fnames[1] = 'FNAME'; ftypes[1] = 'text';
        fnames[2] = 'LNAME'; ftypes[2] = 'text';
        fnames[4] = 'MMERGE4'; ftypes[4] = 'address';
        fnames[6] = 'MMERGE6'; ftypes[6] = 'number';
        fnames[9] = 'MMERGE9'; ftypes[9] = 'text';
        fnames[5] = 'MMERGE5'; ftypes[5] = 'text';
        fnames[7] = 'MMERGE7'; ftypes[7] = 'text';
        fnames[8] = 'MMERGE8'; ftypes[8] = 'text';
        fnames[10] = 'MMERGE10'; ftypes[10] = 'text';
        fnames[11] = 'MMERGE11'; ftypes[11] = 'text';
        fnames[12] = 'MMERGE12'; ftypes[12] = 'text';

        var err_style = '';
        try {
            err_style = mc_custom_error_style;
        } catch (e) {
            err_style = 'margin: 1em 0 0 0; padding: 1em 0.5em 0.5em 0.5em; background: rgb(255, 238, 238) none repeat scroll 0% 0%; font-weight: bold; float: left; z-index: 1; width: 80%; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial; color: rgb(255, 0, 0);';
        }

        var mce_jQuery = jQuery.noConflict();

        mce_jQuery(document).ready(function ($) {
            var options = {
                errorClass: 'mce_inline_error',
                errorElement: 'div',
                errorStyle: err_style,
                onkeyup: function () {},
                onfocusout: function () {},
                onblur: function () {}
            };
            var mce_validator = mce_jQuery("#mc-embedded-subscribe-form").validate(options);
            options = {
                url: 'http://theatricalbellydance.us1.list-manage.com/subscribe/post-json?u=1d127e7630ced825cb1a8b5a9&id=9f12d2a6bb&c=?',
                type: 'GET',
                dataType: 'json',
                contentType: "application/json; charset=utf-8",
                beforeSubmit: function () {
                    mce_jQuery('#mce_tmp_error_msg').remove();
                    mce_jQuery('.datefield', '#mc_embed_signup').each(function () {
                        var txt = 'filled';
                        var fields = new Array();
                        var i = 0;
                        mce_jQuery(':text', this).each(function () {
                            fields[i] = this;
                            i++;
                        });
                        mce_jQuery(':hidden', this).each(function () {
                            if (fields[0].value == 'MM' && fields[1].value == 'DD' && fields[2].value == 'YYYY') {
                                this.value = '';
                            } else if (fields[0].value == '' && fields[1].value == '' && fields[2].value == '') {
                                this.value = '';
                            } else {
                                this.value = fields[0].value + '/' + fields[1].value + '/' + fields[2].value;
                            }
                        });
                    });
                    return mce_validator.form();
                },
                success: mce_success_cb
            };
            mce_jQuery('#mc-embedded-subscribe-form').ajaxForm(options);
        });

        function mce_success_cb(resp) {
            mce_jQuery('#mce-success-response').hide();
            mce_jQuery('#mce-error-response').hide();
            if (resp.result == "success") {
                mce_jQuery('#mce-' + resp.result + '-response').show();
                mce_jQuery('#mce-' + resp.result + '-response').html(resp.msg);
                mce_jQuery('#mc-embedded-subscribe-form').each(function () {
                    this.reset();
                });
            } else {
                var index = -1;
                var msg;
                try {
                    var parts = resp.msg.split(' - ', 2);
                    if (parts[1] == undefined) {
                        msg = resp.msg;
                    } else {
                        i = parseInt(parts[0]);
                        if (i.toString() == parts[0]) {
                            index = parts[0];
                            msg = parts[1];
                        } else {
                            index = -1;
                            msg = resp.msg;
                        }
                    }
                } catch (e) {
                    index = -1;
                    msg = resp.msg;
                }
                try {
                    if (index == -1) {
                        mce_jQuery('#mce-' + resp.result + '-response').show();
                        mce_jQuery('#mce-' + resp.result + '-response').html(msg);
                    } else {
                        err_id = 'mce_tmp_error_msg';
                        html = '<div id="' + err_id + '" style="' + err_style + '"> ' + msg + '</div>';
                        var input_id = '#mc_embed_signup';
                        var f = mce_jQuery(input_id);
                        if (ftypes[index] == 'address') {
                            input_id = '#mce-' + fnames[index] + '-addr1';
                            f = mce_jQuery(input_id).parent().parent().get(0);
                        } else if (ftypes[index] == 'date') {
                            input_id = '#mce-' + fnames[index] + '-month';
                            f = mce_jQuery(input_id).parent().parent().get(0);
                        } else {
                            input_id = '#mce-' + fnames[index];
                            f = mce_jQuery().parent(input_id).get(0);
                        }
                        if (f) {
                            mce_jQuery(f).append(html);
                            mce_jQuery(input_id).focus();
                        } else {
                            mce_jQuery('#mce-' + resp.result + '-response').show();
                            mce_jQuery('#mce-' + resp.result + '-response').html(msg);
                        }
                    }
                } catch (e) {
                    mce_jQuery('#mce-' + resp.result + '-response').show();
                    mce_jQuery('#mce-' + resp.result + '-response').html(msg);
                }
            }
        }

    The hSlides script:

        /*
         * hSlides (1.0) // 2008.02.25 // <http://plugins.jquery.com/project/hslides>
         *
         * REQUIRES jQuery 1.2.3+ <http://jquery.com/>
         *
         * Copyright (c) 2008 TrafficBroker <http://www.trafficbroker.co.uk>
         * Licensed under GPL and MIT licenses
         *
         * hSlides is a horizontal accordion navigation, sliding the panels around to reveal one of interest.
         *
         * Sample Configuration:
         * // this is the minimum configuration needed
         * $('#accordion').hSlides({
         *     totalWidth: 730,
         *     totalHeight: 140,
         *     minPanelWidth: 87,
         *     maxPanelWidth: 425
         * });
         *
         * Config Options:
         * // Required configuration
         * totalWidth: Total width of the accordion // default: 0
         * totalHeight: Total height of the accordion // default: 0
         * minPanelWidth: Minimum width of the panel (closed) // default: 0
         * maxPanelWidth: Maximum width of the panel (opened) // default: 0
         * // Optional configuration
         * midPanelWidth: Middle width of the panel (centered) // default: 0
         * speed: Speed for the animation // default: 500
         * easing: Easing effect for the animation. Other than 'swing' or 'linear' must be provided by plugin // default: 'swing'
         * sensitivity: Sensitivity threshold (must be 1 or higher) // default: 3
         * interval: Milliseconds for onMouseOver polling interval // default: 100
         * timeout: Milliseconds delay before onMouseOut // default: 300
         * eventHandler: Event to open panels: click or hover. The hover option requires the hoverIntent plugin <http://cherne.net/brian/resources/jquery.hoverIntent.html> // default: 'click'
         * panelSelector: HTML element storing the panels // default: 'li'
         * activeClass: CSS class for the active panel // default: none
         * panelPositioning: top -> first panel on the bottom and next on the top, other value -> first panel on the top and next to the bottom // default: 'top'
         * // Callback functions. Inside them, we can refer to the panel with $(this).
         * onEnter: Function raised when the panel is activated. // default: none
         * onLeave: Function raised when the panel is deactivated. // default: none
         *
         * We can override the defaults with:
         * $.fn.hSlides.defaults.easing = 'easeOutCubic';
         *
         * @param settings An object with configuration options
         * @author Jesus Carrera <[email protected]>
         */
        (function($) {
            $.fn.hSlides = function(settings) {
                // override default configuration
                settings = $.extend({}, $.fn.hSlides.defaults, settings);
                // for each accordion
                return this.each(function(){
                    var wrapper = this;
                    var panelLeft = 0;
                    var panels = $(settings.panelSelector, wrapper);
                    var panelPositioning = 1;
                    if (settings.panelPositioning != 'top'){
                        panelLeft = ($(settings.panelSelector, wrapper).length - 1) * settings.minPanelWidth;
                        panels = $(settings.panelSelector, wrapper).reverse();
                        panelPositioning = -1;
                    }
                    // necessary styles for the wrapper
                    $(this).css('position', 'relative').css('overflow', 'hidden').css('width', settings.totalWidth).css('height', settings.totalHeight);
                    // set the initial position of the panels
                    var zIndex = 0;
                    panels.each(function(){
                        // necessary styles for the panels
                        $(this).css('position', 'absolute').css('left', panelLeft).css('zIndex', zIndex).css('height', settings.totalHeight).css('width', settings.maxPanelWidth);
                        zIndex ++;
                        // if this panel is the one activated by default, set it as active and move the next (to show this one)
                        if ($(this).hasClass(settings.activeClass)){
                            $.data($(this)[0], 'active', true);
                            if (settings.panelPositioning != 'top'){
                                panelLeft = ($(settings.panelSelector, wrapper).index(this) + 1) * settings.minPanelWidth - settings.maxPanelWidth;
                            }else{
                                panelLeft = panelLeft + settings.maxPanelWidth;
                            }
                        }else{
                            // check if we are centering and some panel is active
                            // this is why we can't add/remove the active class in the callbacks: positioning the panels if we have one active
                            if (settings.midPanelWidth && $(settings.panelSelector, wrapper).hasClass(settings.activeClass) == false){
                                panelLeft = panelLeft + settings.midPanelWidth * panelPositioning;
                            }else{
                                panelLeft = panelLeft + settings.minPanelWidth * panelPositioning;
                            }
                        }
                    });
                    // iterates through the panels, setting the active one and changing the positions
                    var movePanels = function(){
                        // index of the new active panel
                        var activeIndex = $(settings.panelSelector, wrapper).index(this);
                        // iterate all panels
                        panels.each(function(){
                            // deactivate if it is the active one
                            if ( $.data($(this)[0], 'active') == true ){
                                $.data($(this)[0], 'active', false);
                                $(this).removeClass(settings.activeClass).each(settings.onLeave);
                            }
                            // set position of current panel
                            var currentIndex = $(settings.panelSelector, wrapper).index(this);
                            panelLeft = settings.minPanelWidth * currentIndex;
                            // if the panel is next to the active one, we need to add the opened width
                            if ( (currentIndex * panelPositioning) > (activeIndex * panelPositioning)){
                                panelLeft = panelLeft + (settings.maxPanelWidth - settings.minPanelWidth) * panelPositioning;
                            }
                            // animate
                            $(this).animate({left: panelLeft}, settings.speed, settings.easing);
                        });
                        // activate the new active panel
                        $.data($(this)[0], 'active', true);
                        $(this).addClass(settings.activeClass).each(settings.onEnter);
                    };
                    // center the panels if configured
                    var centerPanels = function(){
                        var panelLeft = 0;
                        if (settings.panelPositioning != 'top'){
                            panelLeft = ($(settings.panelSelector, wrapper).length - 1) * settings.minPanelWidth;
                        }
                        panels.each(function(){
                            $(this).removeClass(settings.activeClass).animate({left: panelLeft}, settings.speed, settings.easing);
                            if ($.data($(this)[0], 'active') == true){
                                $.data($(this)[0], 'active', false);
                                $(this).each(settings.onLeave);
                            }
                            panelLeft = panelLeft + settings.midPanelWidth * panelPositioning;
                        });
                    };
                    // event handling
                    if(settings.eventHandler == 'click'){
                        $(settings.panelSelector, wrapper).click(movePanels);
                    }else{
                        var configHoverPanel = { sensitivity: settings.sensitivity, interval: settings.interval, over: movePanels, timeout: settings.timeout, out: function() {} }
                        var configHoverWrapper = { sensitivity: settings.sensitivity, interval: settings.interval, over: function() {}, timeout: settings.timeout, out: centerPanels }
                        $(settings.panelSelector, wrapper).hoverIntent(configHoverPanel);
                        if (settings.midPanelWidth != 0){
                            $(wrapper).hoverIntent(configHoverWrapper);
                        }
                    }
                });
            };
            // invert the order of the jQuery elements
            $.fn.reverse = function(){
                return this.pushStack(this.get().reverse(), arguments);
            };
            // default settings
            $.fn.hSlides.defaults = {
                totalWidth: 0,
                totalHeight: 0,
                minPanelWidth: 0,
                maxPanelWidth: 0,
                midPanelWidth: 0,
                speed: 500,
                easing: 'swing',
                sensitivity: 3,
                interval: 100,
                timeout: 300,
                eventHandler: 'click',
                panelSelector: 'li',
                activeClass: false,
                panelPositioning: 'top',
                onEnter: function() {},
                onLeave: function() {}
            };
        })(jQuery);

    Read the article

  • SQL SERVER – Reduce the Virtual Log Files (VLFs) from LDF file

    - by pinaldave
    Earlier, I wrote a quick note on SQL SERVER – Detect Virtual Log Files (VLF) in LDF. In response, I got comments suggesting that too many VLFs are bad for the log file. This prompts a simple question: “How many is ‘too many’ VLFs?” I suggest that you go and read the article written by Kimberly over here; I am sure you will get a clear understanding of what a good number of VLFs is from it. If you have lots of VLFs, you can reduce them right away using the following method (I am just attempting to write a working script over here):

        USE AdventureWorks
        GO
        BACKUP LOG AdventureWorks TO DISK='d:\adtlog.bak'
        GO
        -- Get the logical file name of the log file
        sp_helpfile
        GO
        DBCC SHRINKFILE(AdventureWorks_Log, TRUNCATEONLY)
        GO
        ALTER DATABASE AdventureWorks MODIFY FILE (NAME = AdventureWorks_Log, SIZE = 1GB)
        GO
        DBCC LOGINFO
        GO

    Again, here I have assumed that your initial log size is 1 GB, but in reality you should select the number based on your own ideal size for the log file. If your log file grows to 10 GB every day, you may want to set the value to 10 GB. For accuracy, read what Kimberly’s original article says over here. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Python/YACC: Resolving a shift/reduce conflict

    - by Rosarch
    I'm using PLY. Here is one of my states from parser.out:

        state 3

            (5) course_data -> course .
            (6) course_data -> course . course_list_tail
            (3) or_phrase -> course . OR_CONJ COURSE_NUMBER
            (7) course_list_tail -> . , COURSE_NUMBER
            (8) course_list_tail -> . , COURSE_NUMBER course_list_tail

          ! shift/reduce conflict for OR_CONJ resolved as shift

            $end        reduce using rule 5 (course_data -> course .)
            OR_CONJ     shift and go to state 7
            ,           shift and go to state 8

          ! OR_CONJ     [ reduce using rule 5 (course_data -> course .) ]

            course_list_tail    shift and go to state 9

    I want to resolve this as:

        if OR_CONJ is followed by COURSE_NUMBER:
            shift and go to state 7
        else:
            reduce using rule 5 (course_data -> course .)

    How can I fix my parser file to reflect this? Do I need to handle a syntax error by backtracking and trying a different rule?
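
    For what it's worth, the conditional resolution asked for needs two tokens of lookahead, which an LALR(1) generator like PLY cannot express directly; the usual options are restructuring the grammar so the choice never arises, or picking one fixed resolution with a precedence declaration. A self-contained toy grammar, engineered to reproduce the same kind of conflict and to force the reduce (PLY's default is the shift); the token names echo the question, but this is not the asker's actual grammar:

        import ply.lex as lex
        import ply.yacc as yacc

        tokens = ('COURSE', 'OR_CONJ', 'COURSE_NUMBER')

        t_COURSE = r'[a-z]+'
        t_OR_CONJ = r'\|'
        t_COURSE_NUMBER = r'\d+'
        t_ignore = ' '

        def t_error(t):
            t.lexer.skip(1)

        # BARE_COURSE is a fictitious token that exists only to carry a
        # precedence level; listed after OR_CONJ, it gets the higher one.
        precedence = (
            ('nonassoc', 'OR_CONJ'),
            ('nonassoc', 'BARE_COURSE'),
        )

        def p_start_pair(p):
            'start : course_data OR_CONJ COURSE'  # puts OR_CONJ in FOLLOW(course_data)
            p[0] = (p[1], p[3])

        def p_start_or(p):
            'start : or_phrase'
            p[0] = p[1]

        def p_course_data(p):
            'course_data : COURSE %prec BARE_COURSE'  # rule outranks OR_CONJ, so reduce wins
            p[0] = p[1]

        def p_or_phrase(p):
            'or_phrase : COURSE OR_CONJ COURSE_NUMBER'
            p[0] = (p[1], p[3])

        def p_error(p):
            print('syntax error at', p)

        lexer = lex.lex()
        parser = yacc.yacc()
        print(parser.parse('abc | def'))  # parses via the forced reduce

    The catch is visible in the toy itself: forcing the reduce makes or_phrase unreachable at that point (and forcing the shift would do the opposite), which is why folding both shapes into a single rule, rather than backtracking on a syntax error, is normally the better fix.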

    Read the article

  • Ever any performance difference between Java >> and >>> right shift operators?

    - by Sean Owen
    Is there ever reason to think the >> (signed) and >>> (unsigned) right bit-shift operators in Java would perform differently? I can't detect any difference on my machine. This is purely an academic question; it's never going to be the bottleneck, I'm sure. I know it's best to write what you mean foremost: use >> for division by 2, for example. I assume it comes down to which architectures have which operations implemented as an instruction.

    Read the article

  • Alt-Shift won't switch language in Microsoft Word

    - by ripper234
    I have Windows 7 RTM, Office 2007 SP1, and a computer with the English and Hebrew languages installed. In most programs (e.g. Notepad), left Alt-Shift switches from Hebrew to English and vice versa. In Word it also usually works, but sometimes pressing left Alt-Shift just won't do anything. Is this a bug in Windows? In Word?

    Read the article

  • vista issue with shift key

    - by Mr New
    For some reason, when I press the shift key on my laptop, it acts as if the key were mapped to Ctrl+V, which does a paste. I thought it was the keyboard, but then I replaced it with a wireless one and it still gives me the same issue. Can someone help me? I can't just turn off the shift key altogether because I do need it at times. Model 8510w; Vista operating system.

    Read the article

  • Why does the right-shift operator produce a zero instead of a one?

    - by mrt181
    Hi, I am teaching myself Java and am working through the exercises in Thinking in Java. On page 116, exercise 11, you should right-shift an integer through all its binary positions and display each position with Integer.toBinaryString:

        public static void main(String[] args) {
            int i = 8;
            System.out.println(Integer.toBinaryString(i));
            int maxIterations = Integer.toBinaryString(i).length();
            int j;
            for (j = 1; j < maxIterations; j++) {
                i >>= 1;
                System.out.println(Integer.toBinaryString(i));
            }
        }

    In the solution guide the output looks like this:

        1000
        1100
        1110
        1111

    When I run this code I get this:

        1000
        100
        10
        1

    What is going on here? Are the digits cut off? I am using JDK 1.6.0_20 64-bit; the book uses JDK 1.5 32-bit.
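
    A side-by-side check in Python, purely to illustrate the rendering (it reproduces the asker's output, not the solution guide's): binary-string conversion drops leading zeros in both languages, so 8 >> 1 == 4 renders as 100, not 0100; the digits are not cut off, the leading zeros are simply never printed.

        i = 8
        print(bin(i)[2:])      # 1000
        for _ in range(3):
            i >>= 1
            print(bin(i)[2:])  # 100, then 10, then 1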

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably, this has been observed when one starts to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with some fixes for bugs found and corrected during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among them:

        Anu's very important blog post here describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool and some QFE/CU releases that fix underlying bugs which prevented the tool from being fully effective.
        Brian Harry's blog post here describes the problem too.
        This forum thread describes the problem with some solution hints.
        Ravi Shanker's blog post here describes best practices on solving this (TBP).
        Grant Holliday's blog post here describes strategies for using the Test Attachment Cleaner both to detect space problems and to rectify them.

    The problem can be divided into the following areas:

        Publishing of test results from builds
        Publishing of manual test results, and their attachments in particular
        Publishing of deployment binaries for use during a test run
        Bugs in SQL Server preventing total cleanup of data

    (All the published data above is published into the TFS database as attachments.) The test results include all data collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. Also, the pushing of binaries, which happens for automated test runs (including tests run during a build using code coverage, which will include all the files in the deployment folder), contributes a lot to the size of the attached data. In order to handle this systematically, I have set up a 3-stage process:

        1. Find out if you have a database space issue
        2. Set up your TFS server to minimize potential database issues
        3. If you have the "problem", clean up the database and otherwise keep it clean

    Analyze the data

    Are your databases growing? Are unused test results growing out of proportion? To find out, you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases, you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or you can use a set of queries; I often find queries faster because I can tweak them the way I want. But be aware that these queries are undocumented and unsupported, and may change when the product team wants to change them. If you have multiple Project Collections, find out which might have problems.

    (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.)

    Open a SQL Management Studio session to the SQL Server where you have your TFS databases. Use the query below to find the Project Collection databases and their sizes, in descending size order:
        use master
        select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
        FROM sys.master_files
        where type=0
          and substring(db_name(database_id),1,4)='Tfs_'
          and DB_NAME(database_id)<>'Tfs_Configuration'
        order by size desc

    Doing this on one of our SQL servers gave the results shown in the original post's screenshot; it is pretty easy to see which collection to start the work on.

    Find out which tables are possibly too large

    Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case we got a result where, as we learnt from Grant's blog, tbl_Content is in the Version Control category, so the only major issue we had here was tbl_AttachmentContent.

    Find out which team projects have possibly too large attachments

    In order to use the TAC to find and eventually delete attachment data, we need to find out which team projects have these attachments; the team project is a required parameter to the TAC. Use the following query, replacing the collection database name with whatever applies in your case:

        use Tfs_DefaultCollection
        select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by p.projectname
        order by sum(a.compressedlength) desc

    In our case we got a result (some names removed) out of more than 100 team projects accumulated over quite some years, and it was pretty obvious that "Byggtjeneste – Projects" was the main team project to take care of, with the ones on lines 2-4 as the next candidates.

    Check which attachment types take up the most space

    It can be useful to know which attachment types take up the space, so run the following query:

        use Tfs_DefaultCollection
        select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by a.attachmenttype
        order by sum(a.compressedlength) desc

    From the result we got, it was pretty obvious that the problem here was the binary files, as also mentioned in Anu's blog.

    Check which file types, by their extension, take up the most space

    Run the following query:

        use Tfs_DefaultCollection
        select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension,
               sum(compressedlength)/1024 as SizeInKB
        from tbl_Attachment
        group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
        order by sum(compressedlength) desc

    Now you should have collected enough information to tell you what to do, if anything, and some of the information you need in order to set up your TAC settings file, both for a one-time cleanup and for scheduled maintenance later.

    Get your TFS server and environment properly set up

    Whether or not you have the problem yet, you should ensure the TFS server is set up so that the risk of getting into it is minimized. To ensure this, install the following set of updates and components; the assumption is that your TFS Server is at the SP1 level.

    Install the QFE for KB2608743, which also contains detailed instructions on its use; download it from here. The QFE changes the default settings so that deployed binaries, which are used in automated test runs, are not uploaded.
    Binaries will still be uploaded if:

        Code coverage is enabled in the test settings.
        You change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE.

    The hotfix should be installed on:

        The build servers (the build agents)
        The machine hosting the Test Controller
        Local development computers (Visual Studio)
        Local test computers (MTM)

    It is not required on the TFS Server, the test agents or the build controller; it has no effect on these programs.

    If you use SQL Server 2008 R2, you should also install CU 10 (or later). This CU fixes a potential problem of hanging "ghost" files. This seems to happen only in certain trigger situations, but to ensure it doesn't bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2.

    Workaround: If you suspect hanging ghost files, they can, with some mental effort, be deduced from the ghost counters using the following SQL query:

        use master
        SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
               index_type_desc, ghost_record_count, version_ghost_record_count,
               record_count, avg_record_size_in_bytes
        FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

    The problem is a stalled ghost cleanup process. Stop all components that depend on the SQL server, like the TFS Server and SPS services (that is, all applications that connect to the SQL server), then restart the SQL server, and finally start up all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 releases have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes, I will add the link to that CU here. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data; the SQL Server can get into this hanging state (without the CU) in certain cases due to this.

    And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker:

        "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed."

    This rule minimizes the risk of the ghosted hang problem occurring, and further makes it easier for the SQL server ghosting process to work smoothly.

        "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system."

    This is the last step in a 3-step process of removing SQL server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed with the shrink command.

    Cleaning out the attachments

    The TAC is run from the command line, using a set of parameters and controlled by a settings file. The parameters point out a server URI, including the team project collection, and also point at a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script that runs the TAC multiple times, once for each team project; a sketch of such a loop follows below.
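
    One hedged way to script that loop, assuming tcmpt is on the PATH and using the parameter syntax shown further below (the collection URI, project names and settings-file name are placeholders; a batch file or PowerShell script would work just as well as Python here):

        import subprocess

        collection = 'http://tfsserver:8080/tfs/DefaultCollection'  # placeholder URI
        team_projects = ['ProjectA', 'ProjectB']  # e.g. the biggest ones from the size query

        for tp in team_projects:
            # one TAC run per team project, as required by the /teamproject parameter
            subprocess.call([
                'tcmpt', 'attachmentcleanup',
                '/collection:' + collection,
                '/teamproject:' + tp,
                '/settingsfile:cleanup.xml',
                '/outputfile:' + tp + '.tcmpt.log',
                '/mode:delete',
            ])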
    When you install the TAC, there is a very useful readme file in the same directory.

    When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute-force technique: it works, but you need to take care when cleaning up.

    Grant has shown what his settings file looks like in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. This setting can be useful for cleaning out all items, both in a one-off clean-up operation and in a general schedule. There are two scenarios we need to consider:

        Cleaning up an existing overgrown database
        Maintaining a server to avoid an overgrown database, using a scheduled TAC

    1. Cleaning up a database which has grown too big due to these attachments

    This is a one-off job: we do it once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be simply to reduce the size; don't bother with the smaller stuff, which can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items; Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

        <!-- Scenario : Remove large files -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
          </Attachment>
        </DeletionCriteria>

    Or like this, if you want to remove only dll's and pdb's above that size; add an Extensions section (without that section, all extensions will be deleted):

        <!-- Scenario : Remove large files of type dll's and pdb's -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="dll" />
              <Include value="pdb" />
            </Extensions>
          </Attachment>
        </DeletionCriteria>

    Before you start up your scheduled maintenance, you should clear out all older items.

    2. Scheduled maintenance using the TAC

    Run a schedule every night that removes old items in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with a restriction that keeps attachments with active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted:

        <!-- Scenario : Remove old items except if they have active or resolved bugs -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="180" />
          </TestRun>
          <Attachment />
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved"/>
          </LinkedBugs>
        </DeletionCriteria>

    In my experience there are projects which are left with active or resolved work items although no further work is being done. It can be wise to have a cleanup process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step one: use the schedule above, with no LinkedBugs section, as a sweeper task taking away all data older than you could possibly care about, and then have another scheduled TAC task to take out, more specifically, attachments that you are not likely to use.
    This second task could be much more specific and, based on your analysis, clean out what you know is troublesome data:

        <!-- Scenario : Remove specific files early -->
        <DeletionCriteria>
          <TestRun >
            <AgeInDays OlderThan="30" />
          </TestRun>
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="iTrace"/>
              <Include value="dll"/>
              <Include value="pdb"/>
              <Include value="wmv"/>
            </Extensions>
          </Attachment>
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved" />
          </LinkedBugs>
        </DeletionCriteria>

    The readme document for the TAC says that it recognizes "internal" extensions, but it does in fact recognize any extension. To run the tool, use the following command:

        tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

    Shrinking the database

    You could run a shrink-database command after the TAC has run in cases where a lot of data has been deleted. In that case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the space will be reused by the SQL server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink the database. To shrink the database you run a DBCC SHRINKDATABASE command, and then follow up with a DBCC INDEXDEFRAG afterwards. I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when needed.

    Read the article

  • Reduce weight in healthy way - Day 2

    - by krnites
    My second day of reducing weight, and it seems most of the blogs are correct in saying that you can reduce weight if your calorie consumption is less than what you burn. In one day I have lost 1 lb without doing anything; my current weight is 177.4 lbs. Yesterday I ate a smaller portion of dinner than I used to, and earlier, around 7 PM. Normally I eat my dinner around 10 PM and go to sleep within 2 hours of eating, but yesterday I ate around 7 PM and went to sleep only after 12. On my second day I have eaten noodles and 3 eggs for breakfast and sesame chicken (I love it) with fried rice for lunch. I still have not gone running, but I plan to go running and then swimming; I hope it will at least burn the calories I have taken in. On some site it was written that a normal man's body needs around 2000 calories a day. So if I am eating less than 2000 calories (noodles + 3 eggs = 400 + 200, rice + sesame chicken = 1300, total = 1900) and burning around 300 calories, my net intake will be 1600, which is less than what my body needs. So most probably by tomorrow I should come in under the 176 lb bracket. Apart from counting the calories I take in every day and the approximate calories I burn, I have also started tracking my physical activities on my mobile. I have a beautiful Samsung Focus S Windows Phone 7.5 device, and after browsing the marketplace I downloaded a couple of health apps:

    1. 6 Week Training - has a set of exercises and lets you choose the number of sets you want to do for each. It focuses on your core muscles.
    2. Fast Food Calories - lists all the fast food chains and gives the calorie count of each item on their menus. For example, Burger King's large french fries (salted) contain 500 calories.
    3. Gym Pocket Guide - contains instructions for different kinds of exercises and shows the right way of doing them.
    4. RunSat - a GPS-based application. It marks the distance you have run, shows the path you took on a map, the total calories burnt, and laps completed. I love this app.
    5. Stop Watch

    I have also noticed that if I am running in the gym with a television in front of me showing a movie or serial I like, I don't notice the time. Running on a treadmill is normally very boring, but if a music video or a sitcom is playing, I can run for an hour or half an hour. So on day 2 I have lost 1 lb, and I have learnt that calorie intake should be less than the calories burnt on a given day.

    Read the article

  • Reduce ERP Consolidation Risks with Oracle Master Data Management

    - by Dain C. Hansen
    Reducing the risk of ERP consolidation starts first and foremost with your data. This is nothing new; companies with multiple misaligned ERP systems are often putting inordinate risk on their business. It can translate into too much inventory, long lead times, and shipping issues from poorly organized and specified goods. And don't forget the finance side! When goods are shipped and promises are kept or not kept, there's the issue of accounts: no single chart of accounts translates to no accountability. So, I've decided: I need to consolidate! Well, you can't consolidate ERP applications (or, for that matter, any of your applications) without first considering your data. This means looking at how your data is being integrated by these ERP systems, how it is being synchronized, and what information is being shared or not shared, and, most importantly, making sure that the data is mastered. What is the best way to do this? In the recent webcast Reduce ERP Consolidation Risks with Oracle Master Data Management we outlined 3 key guidelines:

    #1: Consolidate your product data
    #2: Consolidate your customer and supplier (party) data
    #3: Consolidate your financial data

    Together these help customers achieve reduced risk, better customer intimacy, reduced inventory levels, elimination of product variations, and finally a single master chart of accounts. In the case of Oracle's customer Zebra Technologies, they were able to consolidate over 140 applications by mastering their data, which ultimately gave them 60% cost savings on IT spend for the year.

    Oracle's Solution for ERP Consolidation: Master Data Management

    Oracle's enterprise master data management (MDM) can play a big role in ERP consolidation. It includes a set of products that consolidates and maintains complete, accurate, and authoritative master data across the enterprise and distributes this master information to all operational and analytical applications as a shared service. It is optimized to work with any application source (not just Oracle's) and can integrate using technology from Oracle Fusion Middleware (i.e., GoldenGate for data synchronization and real-time replication, or ODI with its E-LT optimized bulk data and transformation capability). In addition, especially for ERP consolidation use cases, it's important to leverage the AIA and SOA capabilities of Fusion Middleware to connect these multiple applications together and relay the data into the correct hub. Oracle's MDM strategy is a unique offering in the industry, one that has common elements across Middleware, BI/DW and engineered systems, combined with Enterprise Data Quality to enable comprehensive data governance at all levels. In addition, Oracle MDM provides best-in-class capabilities to master all variations of data, including customer, supplier, product, and financial data.

    But ultimately at the center of Oracle MDM is your data: making it more trusted, making it secure and accessible as part of a role-based approach, and getting it to make sense to you in any situation, whether it's a specific ERP process like we talked about or something custom to your organization. To learn more about these techniques in ERP consolidation, watch our webcast or go to the Oracle MDM website at www.oracle.com/goto/mdm

    Read the article

  • jQuery UI Autocomplete widget conflict with jquery.menu widget - how can I solve it?

    - by JK
    My app already has a completed menu using jquery.menu.js, found at http://wiki.jqueryui.com/Menu. I'm now also trying to add the autocomplete widget from jQuery UI 1.8.1, but both of these define a .menu() function, and the two conflict with each other. If I put jquery-ui-1.8.1.custom.js first in the head, then autocomplete works but the menu does not. If I put jquery-menu.js first in the head, then the menu works but autocomplete doesn't. Is there a way to solve this without editing either plugin? (If I edit one, I will probably lose those changes the next time either plugin is upgraded.)

    Read the article

  • git rebase conflict was caused by which commit

    - by dorelal
    When I do git rebase master I sometimes get a conflict, and sometimes it becomes very difficult to track down the issue even with the error messages. It would be a real help if I could find out which commit git is trying to reapply when the conflict occurs. How can I find out which commit is causing the conflict?
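
    A couple of hedged pointers, since the exact mechanism depends on your git version: while an am-based rebase is stopped on a conflict, the commit being replayed is recorded in .git/rebase-apply/original-commit, and newer releases of git also expose it directly as the pseudo-ref REBASE_HEAD. So one of

        git show REBASE_HEAD
        cat .git/rebase-apply/original-commit

    should identify the commit being reapplied.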

    Read the article

  • An XKB keyboard map that responds to the left and right shift key individually

    - by mbfisher
    First off, excuse my ignorance of X and XKB; I've been trying to hack together a solution in the hope of achieving what I want without requiring a detailed grasp of them. I'm trying to create an XKB keyboard map on Ubuntu 12.04 that allows me to stipulate which of the two shift keys constitutes the Level2 modifier. Specifically, the 4 key should only produce a $ when the right shift is held, not the left. My reading so far:

        http://www.charvolant.org/~doug/xkb/html/node5.html
        http://people.uleth.ca/~daniel.odonnell/Blog/custom-keyboard-in-linuxx11
        http://www.x.org/releases/X11R7.5/doc/input/XKB-Enhancing.html

    Lots of searching! I've attempted to define a custom type, and then refer to it explicitly in a symbols map.

    /usr/share/X11/xkb/types/mbfisher:

        default xkb_types "mbfisher" {
            type "RIGHT_SHIFT" {
                modifiers = None+Shift_R;
                map[None] = Level1;
                map[Shift_R] = Level2;
            };
        }

    /usr/share/X11/xkb/symbols/mbfisher:

        default partial alphanumeric_keys
        xkb_symbols "basic" {
            name[Group1] = "mbfisher";
            key <AE04> { type = "RIGHT_SHIFT", symbols[Group1] = [ 4, dollar ] };
        };

    I'm then selecting the map with the Ubuntu Keyboard Layout GUI. This obviously disables the alphanumeric keyboard apart from the 4 key, but the dollar sign can still be typed with either shift key. I'm conscious of writing a massive question with lots of useless information, so I'll stop here; please ask for anything I've missed out. Any ideas?

    Read the article

  • How to Reduce the Size of Your WinSXS Folder on Windows 7 or 8

    - by Chris Hoffman
    The WinSXS folder at C:\Windows\WinSXS is massive and continues to grow the longer you have Windows installed. This folder builds up unnecessary files over time, such as old versions of system components. It also contains files for uninstalled, disabled Windows components; even if you don't have a Windows component installed, it will be present in your WinSXS folder, taking up space.

    Why the WinSXS Folder Gets So Big

    The WinSXS folder contains all Windows system components. In fact, component files elsewhere in Windows are just links to files contained in the WinSXS folder; the WinSXS folder contains every operating system file. When Windows installs updates, it drops the new Windows component into the WinSXS folder and keeps the old component there as well. This means that every Windows update you install increases the size of your WinSXS folder. It is what allows you to uninstall operating system updates from the Control Panel, which can be useful in the case of a buggy update, but it's a feature that's rarely used.

    Windows 7 dealt with this by including a feature that allows Windows to clean up old Windows update files after you install a new Windows service pack. The idea was that the system could be cleaned up regularly along with service packs. However, Windows 7 only saw one service pack, Service Pack 1, released in 2010, and Microsoft has no intention of launching another. This means that, for more than three years, Windows update uninstallation files have been building up on Windows 7 systems and couldn't be easily removed.

    Clean Up Update Files

    To fix this problem, Microsoft recently backported a feature from Windows 8 to Windows 7. They did this without much fanfare; it was rolled out in a typical minor operating system update, the kind that doesn't generally add new features.

    To clean up such update files, open the Disk Cleanup wizard (tap the Windows key, type "disk cleanup" into the Start menu, and press Enter). Click the Clean up System Files button, enable the Windows Update Cleanup option, and click OK. If you've been using your Windows 7 system for a few years, you'll likely be able to free several gigabytes of space. The next time you reboot after doing this, Windows will take a few minutes to clean up system files before you can log in and use your desktop. If you don't see this feature in the Disk Cleanup window, you're likely behind on your updates; install the latest updates from Windows Update.

    Windows 8 and 8.1 include built-in features that do this automatically. In fact, there's a StartComponentCleanup scheduled task included with Windows that will automatically run in the background, cleaning up components 30 days after you've installed them. This 30-day period gives you time to uninstall an update if it causes problems. If you'd like to manually clean up updates, you can also use the Windows Update Cleanup option in the Disk Cleanup window, just as you can on Windows 7. (To open it, tap the Windows key, type "disk cleanup" to perform a search, and click the "Free up disk space by removing unnecessary files" shortcut that appears.)

    Windows 8.1 gives you more options, allowing you to forcibly remove all previous versions of uninstalled components, even ones that haven't been around for more than 30 days. These commands must be run in an elevated Command Prompt; in other words, start the Command Prompt window as Administrator. For example, the following command will uninstall all previous versions of components without the scheduled task's 30-day grace period:

        DISM.exe /online /Cleanup-Image /StartComponentCleanup

    The following command will remove files needed for uninstallation of service packs. You won't be able to uninstall any currently installed service packs after running this command:

        DISM.exe /online /Cleanup-Image /SPSuperseded

    The following command will remove all old versions of every component. You won't be able to uninstall any currently installed service packs or updates after this completes:

        DISM.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase

    Remove Features on Demand

    Modern versions of Windows allow you to enable or disable Windows features on demand. You'll find a list of these features in the Windows Features window accessible from the Control Panel. Even features you don't have installed (that is, the features you see unchecked in this window) are stored on your hard drive in your WinSXS folder. If you choose to install them, they'll be made available from your WinSXS folder. This means you won't have to download anything or provide Windows installation media to install these features.

    However, these features take up space. While this shouldn't matter on typical computers, users with extremely low amounts of storage or Windows Server administrators who want to slim their Windows installs down to the smallest possible set of system files may want to get these files off their hard drives. For this reason, Windows 8 added a new option that allows you to remove these uninstalled components from the WinSXS folder entirely, freeing up space. If you choose to install the removed components later, Windows will prompt you to download the component files from Microsoft.

    To do this, open a Command Prompt window as Administrator. Use the following command to see the features available to you:

        DISM.exe /Online /English /Get-Features /Format:Table

    You'll see a table of feature names and their states. To remove a feature from your system, use the following command, replacing NAME with the name of the feature you want to remove; you can get the feature name you need from the table above.

        DISM.exe /Online /Disable-Feature /featurename:NAME /Remove

    If you run the /Get-Features command again, you'll now see that the feature has a status of "Disabled with Payload Removed" instead of just "Disabled." That's how you know it's not taking up space on your computer's hard drive. If you're trying to slim down a Windows system as much as possible, be sure to check out our lists of ways to free up disk space on Windows and reduce the space used by system files.

    Read the article
