Search Results

Search found 97982 results on 3920 pages for 'page file'.


  • Nvidia driver install

    - by Adham
    I tried blacklisting Nouveau and everything else I could find on the internet, running the installer as root from a text console (Ctrl+Alt+F1), with no solution. Here is the installer log:

        nvidia-installer log file '/var/log/nvidia-installer.log'
        creation time: Mon Jun 17 08:35:25 2013
        installer version: 319.23

        PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

        nvidia-installer command line:
            ./nvidia-installer

        Using: nvidia-installer ncurses user interface
        -> License accepted.
        -> Installing NVIDIA driver version 319.23.
        -> Running distribution scripts
           executing: '/usr/lib/nvidia/pre-install'...
        -> done.
        -> The distribution-provided pre-install script failed! Continue installation anyway? (Answer: Yes)
        ERROR: The Nouveau kernel driver is currently in use by your system. This driver is incompatible with the NVIDIA driver, and must be disabled before proceeding. Please consult the NVIDIA driver README and your Linux distribution's documentation for details on how to correctly disable the Nouveau kernel driver.
        WARNING: One or more modprobe configuration files to disable Nouveau are already present at: /etc/modprobe.d/nvidia-installer-disable-nouveau.conf. Please be sure you have rebooted your system since these files were written. If you have rebooted, then Nouveau may be enabled for other reasons, such as being included in the system initial ramdisk or in your X configuration file. Please consult the NVIDIA driver README and your Linux distribution's documentation for details on how to correctly disable the Nouveau kernel driver.
        ERROR: Installation has failed. Please see the file '/var/log/nvidia-installer.log' for details. You may find suggestions on fixing installation problems in the README available on the Linux driver download page at www.nvidia.com.
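
    A hedged follow-up sketch (stock Ubuntu paths; adjust for your system): the warning above says Nouveau may still be loaded from the initial ramdisk, so blacklist it, rebuild the initramfs, reboot, and only then re-run ./nvidia-installer:

        printf 'blacklist nouveau\noptions nouveau modeset=0\n' | \
            sudo tee /etc/modprobe.d/blacklist-nouveau.conf
        sudo update-initramfs -u
        sudo reboot
        # after the reboot, this should print nothing before you retry the installer
        lsmod | grep nouveau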

    Read the article

  • Resolution changes when using switch

    - by Edward D
    So, the "real" resolution of my monitor is 1024x768. That's what I'd use on my docked Windows laptop, and what I'd use on my Xubuntu desktop connected directly. When I connect a switch to switch between the two, however, the Ubuntu machine's resolution changes. Everything's still proportional, and it still thinks it's doing 1024x768, but the icons and fonts appear larger. Not 800x600 larger, but still big.

    When I used Xubuntu Precise, I created an xorg.conf file to set a resolution of 1280x1024, which made it look the way it does without the switch ... as a workaround. When I upgraded to Trusty, I lost this. I tried to re-create it, but it doesn't seem to load my file. Ideally, I'd like to correct the original problem, but I'd settle for being able to up the resolution. I searched for a while, and tried to do it, but I'm giving up ... please help me out.

    Controller: Intel Corporation 82915G/P/GV/GL/PL/910GL Memory Controller Hub (rev 04)
    Monitor: NEC MultiSync LCD 1850e (http://www.necdisplay.com/documents/UserManuals/LCD1850E_manual.pdf)
    OS: Xubuntu 14.04 Trusty

    /etc/X11/xorg.conf:

        # YOU CREATED THIS FILE
        # sudo leafpad /etc/X11/xorg.conf
        Section "Device"
            Identifier "Configured Video Device"
        EndSection

        Section "Monitor"
            Identifier "NEC LCD1850E"
            # I found Synchronization Range at:
            # http://www.necdisplay.com/documents/UserManuals/LCD1850E_manual.pdf
            HorizSync 31.0-82.0
            VertRefresh 55.0-85.0
        EndSection

        Section "Screen"
            Identifier "Default Screen"
            Monitor "NEC LCD1850E"
            Device "Configured Video Device"
            SubSection "Display"
                Depth 24
                Modes "1280x1024" "1280x960" "1024x768" "800x600" "848x480" "640x480"
            EndSubSection
        EndSection
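
    A hedged workaround sketch using xrandr instead of xorg.conf (the output name VGA-1 and the mode timings are assumptions; take both from the output of plain xrandr and of cvt on your own machine):

        xrandr                                    # list outputs and the modes X currently offers
        xrandr --output VGA-1 --mode 1280x1024    # force the mode the old xorg.conf used
        # if 1280x1024 is not listed, create and attach it first:
        cvt 1280 1024 60
        xrandr --newmode "1280x1024_60.00" 109.00 1280 1368 1496 1712 1024 1027 1034 1063 -hsync +vsync
        xrandr --addmode VGA-1 "1280x1024_60.00"
        xrandr --output VGA-1 --mode "1280x1024_60.00"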

    Read the article

  • Bizarre SSH Problem - It won't even start

    - by thallium85
    I recently got Ubuntu 12.04 Precise, got it up and running with some MediaWiki software, static IP on the box and router, and was able to access the main page even from a cell phone. Everything seemed great... Then I wanted to finally get rid of the monitor and keyboard and login remotely via SSH. I installed openssh-server, let everything point to port 22 for a test run and installed putty on my Windows XP machine. I got a connection refused. Went back and started checking the Ubuntu install itself... (I'm under root from this point on)

        $ sudo -s
        $ service ssh status
        ssh stop/waiting
        $ service ssh start
        ssh start/running, process 2212
        $ service ssh status
        ssh stop/waiting

    Apparently ssh has stopped or is waiting for something....

        $ ssh localhost
        ssh: connect to host localhost port 22: Connection refused

    I can't even connect to myself... I checked ufw (firewall) to see if port 22 is doing alright...

        $ sudo ufw status
        Status: active

        To            Action      From
        22            ALLOW       Anywhere
        22/tcp        ALLOW       Anywhere
        22            ALLOW       Anywhere (v6)
        22/tcp        ALLOW       Anywhere (v6)

    sshd_config shows only Port 22. Is ssh not using the right IP address at all? I just don't get what I did wrong here. When this is up and running I will def change the port number, but for now, I don't want to mess with the default install too much until a test run with putty is successful.

    Edit: Here are my sshd_config file and my ssh_config file. The command /usr/sbin/sshd -p 22 -D -d -e returns:

        /etc/ssh/sshd_config line 159: Subsystem 'sftp' already defined.

    Edit: @phoibus moving the sshd_config file and reinstalling did the trick!

        service ssh status

    The above command shows that ssh is now running and I am now able to log in from my windows xp computer remotely via putty. Thanks so much! I can now use my monitor for other things!
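
    A hedged sketch of what the accepted fix amounts to (moving the broken sshd_config aside and letting the package write a fresh one), plus the standard config test:

        sudo mv /etc/ssh/sshd_config /etc/ssh/sshd_config.broken
        sudo apt-get -o Dpkg::Options::="--force-confmiss" install --reinstall openssh-server
        sudo /usr/sbin/sshd -t          # config test: silence means the file parses cleanly
        sudo service ssh restart && service ssh status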

    Read the article

  • Parsing mathematical expressions with two values that have parentheses and minus signs

    - by user45921
    I'm trying to parse equations, read from a text file, that only have two values or the square root of a single value, like these:

        100+100
        -100-100
        -(100)+(-100)
        sqrt(100)

    I need to split them by the minus signs, the parentheses, the operator symbol in the middle, and the square root, and I have no idea how to start... I've got the file part done and the simple calculation parts, except that I couldn't get my program to solve the equations above.

        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>
        #include <math.h>

        int main(void) {
            FILE *fp;
            char buff[255], sym, sym2, del1, del2, del3, del4;
            double num1, num2;
            int ret;

            fp = fopen("input.txt", "r");
            while (fgets(buff, sizeof(buff), fp) != NULL) {
                char *tok = buff;
                sscanf(tok, "%lf%c%lf", &num1, &sym, &num2);
                switch (sym) {
                    case '+': printf("%lf\n", num1 + num2); break;
                    case '-': printf("%lf\n", num1 - num2); break;
                    case '*': printf("%lf\n", num1 * num2); break;
                    case '/': printf("%lf\n", num1 / num2); break;
                    default:  printf("The input value is not correct\n"); break;
                }
            }
            fclose(fp);
            return 0;
        }

    That is what I have written for the other basic operations, without parentheses and without the minus sign on the second value, and it works great for the simple ones. I'm using a switch to calculate add, sub, mul and divide, but I'm not sure how to properly use the sscanf function (if I am using it properly at all), or whether there is another way, using a function like strtok, to properly parse the parentheses and the minus signs. Any kind of help?
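
    A hedged sketch of one way to start (not a general expression parser): read each operand with a small helper that accepts an optional leading minus, optional parentheses and sqrt(...), instead of relying on a single sscanf format string. It assumes the same input.txt and only the four line shapes shown above:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <ctype.h>
        #include <math.h>

        /* Reads one operand at *p: [-] [(] [-] number-or-sqrt(...) [)] */
        static int parse_operand(const char **p, double *out)
        {
            int neg = 0, parens = 0;
            char *end;

            while (isspace((unsigned char)**p)) (*p)++;
            if (**p == '-') { neg = 1; (*p)++; }
            while (**p == '(') {                      /* e.g. "(100)" or "(-100)" */
                parens++; (*p)++;
                if (**p == '-') { neg = !neg; (*p)++; }
            }
            if (strncmp(*p, "sqrt(", 5) == 0) {       /* e.g. "sqrt(100)" */
                *p += 5;
                *out = sqrt(strtod(*p, &end));
                *p = end;
                if (**p == ')') (*p)++;
            } else {
                *out = strtod(*p, &end);
                if (end == *p) return 0;              /* no number here: bad input */
                *p = end;
            }
            while (parens-- > 0 && **p == ')') (*p)++;
            if (neg) *out = -*out;
            return 1;
        }

        int main(void)
        {
            char buff[255];
            FILE *fp = fopen("input.txt", "r");
            if (fp == NULL) { perror("input.txt"); return 1; }

            while (fgets(buff, sizeof buff, fp) != NULL) {
                const char *p = buff;
                double a, b;
                char op;

                if (!parse_operand(&p, &a)) { printf("The input value is not correct\n"); continue; }
                while (isspace((unsigned char)*p)) p++;
                op = *p;
                if (op == '\0') { printf("%lf\n", a); continue; }   /* lone value, e.g. sqrt(100) */
                p++;
                if (!parse_operand(&p, &b)) { printf("The input value is not correct\n"); continue; }

                switch (op) {
                    case '+': printf("%lf\n", a + b); break;
                    case '-': printf("%lf\n", a - b); break;
                    case '*': printf("%lf\n", a * b); break;
                    case '/': printf("%lf\n", a / b); break;
                    default:  printf("The input value is not correct\n"); break;
                }
            }
            fclose(fp);
            return 0;
        }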

    Read the article

  • Terminal/Putty showing control characters (^M) after updating

    - by jaycee48
    I updated my system yesterday afternoon using the recommended updates from Update Manager. After it completed, I shut down my system and went home for the day. I came in this morning and I am getting control characters displayed when using vi in both the standard terminal emulator, PuTTY, and PuTTY inside the Windows virtual machine that I run with VirtualBox. I have made no other system changes and I cannot figure out how this occurred. It's as if every text file I have was created in DOS. I've searched the forums and I haven't found any answers.

    I am using xterm as my emulator, and I checked with 3 of my coworkers and none of them are having this problem, so we do not believe it is a server-side issue, especially since I've checked 3 different servers. There's nothing in my .profile other than PATH variables, so I'm using the same terminal settings as everyone else. Some files are fine (I can open and read both /etc/environment and my .profile) but almost any kind of server-generated log file is trash. Running cat or head on the same file displays the contents without the characters. Any ideas would be appreciated. Thanks.

    Read the article

  • AJAX call + JQuery Dialog Title changing dynamically

    - by Panther24
    I have an AJAX call which loads a dialog page. Depending on the content of the data returned by the AJAX call, I want to change the title of the window. How do I do that? Here is the snippet of code:

        var divid = "brHistoryForm";
        var url = "getRestoreFiles.action?profileName="+profileName;
        // Create xmlHttp
        var xmlHttp = AJAX();
        // The code...
        xmlHttp.onreadystatechange=function(){
            if(xmlHttp.readyState==4){
                document.getElementById(divid).innerHTML=xmlHttp.responseText;
            }
        }
        xmlHttp.open("GET",url,true);
        xmlHttp.send(null);
        $('#brHistoryForm').dialog('open');
        jQuery('#brHistoryForm').focus();
        document.getElementById('pageTitle').innerHTML = "<h2>"+profileName+" - B&R History</h2>"

    Here 'pageTitle' is a div. When I run the above piece of code, it opens a dialog window; the action is redirected to a JSP page which is loaded inside the div. It works fine, but the title does not get set :(. I've tried to do the setting of the title in the redirected JSP page; it doesn't work either. Here is the code of that JSP page:

        <%@page contentType="text/html" pageEncoding="UTF-8"%>
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <%@ taglib prefix="s" uri="/struts-tags" %>
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
            <title>B&R History</title>
        </head>
        <body>
            <table style="width: 100%">
                <tr>
                    <td style="width: 40%">
                        <div id="pageTitle">
                            <h2>B&R History</h2>
                        </div>
                    </td>
                </tr>
            </table>
            <table id="diskTable" cellpadding="0" cellspacing="0" border="1" class="display">
                <thead>
                    <tr>
                        <th>Select</th><th>Operation</th>
                        <th>Source</th><th>Destination</th>
                        <th>Status</th><th>Start Time</th><th>End Time</th>
                    </tr>
                </thead>
                <tfoot></tfoot>
                <tbody>
                    <s:iterator value="restoreFileList">
                        <tr>
                            <td>
                                <s:if test="%{status.equals('Finished')}">
                                    <input onClick="loadRestoreForm('<s:property value="name"/>', '<s:property value="to_path"/>', '<s:property value="status"/>')" type="radio" name='chk' id="chk" value="<s:property value='id'/>,<s:property value="status"/>" >
                                </s:if>
                                <s:else>
                                    <input type="radio" name='chk' id="chk" value="<s:property value='id'/>,<s:property value="status"/>" disabled>
                                </s:else>
                            </td>
                            <td>
                                <s:if test="%{br_type == 0}"> Backup </s:if>
                                <s:else> Restore </s:else>
                            </td>
                            <td><s:property value="from_path"/></td>
                            <td><s:property value="to_path"/></td>
                            <td><s:property value="status"/></td>
                            <td><s:property value="start_time"/></td>
                            <td><s:property value="end_time"/></td>
                        </tr>
                    </s:iterator>
                </tbody>
            </table>
        </body>
        </html>

    Any help would be appreciated.
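
    One hedged sketch of a fix, assuming the dialog is a jQuery UI dialog: set the widget's own title option rather than writing into the pageTitle div, and do it inside the readyState callback so it runs after the response (which replaces the div's contents) has arrived:

        xmlHttp.onreadystatechange = function () {
            if (xmlHttp.readyState == 4) {
                document.getElementById(divid).innerHTML = xmlHttp.responseText;
                // jQuery UI's documented way to retitle an open dialog
                $('#brHistoryForm').dialog('option', 'title', profileName + ' - B&R History');
            }
        };
        $('#brHistoryForm').dialog('open');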

    Read the article

  • Password protect an aliased virtual directory

    - by Jason
    I have a main domain being hosted through cPanel. I also have a sub-domain that I would like to appear as a path under the main domain instead of as a sub-domain. So I have:

        http://example.com/       pointing to the main hosted files.
        http://example.com/mydir  pointing to the subdomain files.

    This is achieved by an httpd.conf include from the main domain section to set an alias:

        alias /mydir /path/to/subdomain/files/

    Now, that works fine so far. The problem is that if a .htaccess file under /path/to/the/subdomain/files/ contains an error, the alias is completely skipped, and /mydir goes instead to the main host files. That is kind of surprising to me - I would expect an error to return an error instead.

    Now the killer: if I try to password protect /path/to/subdomain/files/, then trying to access http://example.com/mydir will again attempt to deliver from under the main hosted files and not from /path/to/subdomain/files/. I am not seeing any errors reported on the .htaccess file in the Apache error log, so I am assuming the .htaccess is valid:

        AuthUserFile /path/to/valid/readable/.htpasswd
        AuthName "Secure Access"
        AuthType Basic
        Require valid-user

    This kind of behaviour does not seem right to me. Is there something obvious that could be causing it? Or is this just the way it works? Perhaps using an alias is the wrong way to go?
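
    A hedged sketch of one way around this: attach the authentication to a <Directory> block in the same httpd.conf include instead of relying on the .htaccess under the aliased path (paths copied from the question):

        Alias /mydir /path/to/subdomain/files/
        <Directory "/path/to/subdomain/files/">
            AuthUserFile /path/to/valid/readable/.htpasswd
            AuthName "Secure Access"
            AuthType Basic
            Require valid-user
            AllowOverride None
        </Directory>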

    Read the article

  • How to Implement Project Type "Copy", "Move", "Rename", and "Delete"

    - by Geertjan
    You've followed the NetBeans Project Type Tutorial and now you'd like to let the user copy, move, rename, and delete the projects conforming to your project type. When they right-click a project, they should see the relevant menu items and those menu items should provide dialogs for user interaction, followed by event handling code to deal with the current operation. Right now, at the end of the tutorial, the "Copy" and "Delete" menu items are present but disabled, while the "Move" and "Rename" menu items are absent.

    The NetBeans Project API provides a built-in mechanism out of the box that you can leverage for project-level "Copy", "Move", "Rename", and "Delete" actions. All the functionality is there for you to use, while all that you need to do is a bit of enablement and configuration, which is described below. To get started, read the following from the NetBeans Project API:

        http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/ActionProvider.html
        http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/CopyOperationImplementation.html
        http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/MoveOrRenameOperationImplementation.html
        http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/DeleteOperationImplementation.html

    Now, let's do some work. For each of the menu items we're interested in, we need to do the following:

    1. Provide enablement and invocation handling in an ActionProvider implementation.
    2. Provide appropriate OperationImplementation classes.
    3. Add the new classes to the Project Lookup.
    4. Make the Actions visible on the Project Node.
    5. Run the application and verify the Actions work as you'd like.

    Here we go:

    1. Create an ActionProvider. Here you specify the Actions that should be supported, the conditions under which they should be enabled, and what should happen when they're invoked, using lots of default code that lets you reuse the functionality provided by the NetBeans Project API:

        class CustomerActionProvider implements ActionProvider {

            @Override
            public String[] getSupportedActions() {
                return new String[]{
                    ActionProvider.COMMAND_RENAME,
                    ActionProvider.COMMAND_MOVE,
                    ActionProvider.COMMAND_COPY,
                    ActionProvider.COMMAND_DELETE
                };
            }

            @Override
            public void invokeAction(String string, Lookup lkp) throws IllegalArgumentException {
                if (string.equalsIgnoreCase(ActionProvider.COMMAND_RENAME)) {
                    DefaultProjectOperations.performDefaultRenameOperation(
                            CustomerProject.this, "");
                }
                if (string.equalsIgnoreCase(ActionProvider.COMMAND_MOVE)) {
                    DefaultProjectOperations.performDefaultMoveOperation(
                            CustomerProject.this);
                }
                if (string.equalsIgnoreCase(ActionProvider.COMMAND_COPY)) {
                    DefaultProjectOperations.performDefaultCopyOperation(
                            CustomerProject.this);
                }
                if (string.equalsIgnoreCase(ActionProvider.COMMAND_DELETE)) {
                    DefaultProjectOperations.performDefaultDeleteOperation(
                            CustomerProject.this);
                }
            }

            @Override
            public boolean isActionEnabled(String command, Lookup lookup) throws IllegalArgumentException {
                if ((command.equals(ActionProvider.COMMAND_RENAME))) {
                    return true;
                } else if ((command.equals(ActionProvider.COMMAND_MOVE))) {
                    return true;
                } else if ((command.equals(ActionProvider.COMMAND_COPY))) {
                    return true;
                } else if ((command.equals(ActionProvider.COMMAND_DELETE))) {
                    return true;
                }
                return false;
            }
        }

    Importantly, to round off this step, add "new CustomerActionProvider()" to the "getLookup" method of the project. If you were to run the application right now, all the Actions we're interested in would be enabled (if they are visible, as described in step 4 below), but when you invoke any of them you'd get an error message, because each of the DefaultProjectOperations above looks in the Lookup of the Project for the presence of an implementation of a class for handling the operation. That's what we're going to do in the next step.

    2. Provide Implementations of Project Operations. For each of our operations, the NetBeans Project API lets you implement classes to handle the operation. The dialogs for interacting with the project are provided by the NetBeans project system, but what happens with the folders and files during the operation can be influenced via the operations. Below are the simplest possible implementations, i.e., here we assume we want nothing special to happen. Each of the below needs to be in the Lookup of the Project in order for the operation invocation to succeed.

        private final class CustomerProjectMoveOrRenameOperation implements MoveOrRenameOperationImplementation {
            @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); }
            @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); }
            @Override public void notifyRenaming() throws IOException { }
            @Override public void notifyRenamed(String nueName) throws IOException { }
            @Override public void notifyMoving() throws IOException { }
            @Override public void notifyMoved(Project original, File originalPath, String nueName) throws IOException { }
        }

        private final class CustomerProjectCopyOperation implements CopyOperationImplementation {
            @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); }
            @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); }
            @Override public void notifyCopying() throws IOException { }
            @Override public void notifyCopied(Project prjct, File file, String string) throws IOException { }
        }

        private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation {
            @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); }
            @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); }
            @Override public void notifyDeleting() throws IOException { }
            @Override public void notifyDeleted() throws IOException { }
        }

    Also make sure to put the above methods into the Project Lookup.

    3. Check the Lookup of the Project. The "getLookup()" method of the project should now include the classes you created above, as shown in bold below:

        @Override
        public Lookup getLookup() {
            if (lkp == null) {
                lkp = Lookups.fixed(new Object[]{
                    this,
                    new Info(),
                    new CustomerProjectLogicalView(this),
                    new CustomerCustomizerProvider(this),
                    new CustomerActionProvider(),
                    new CustomerProjectMoveOrRenameOperation(),
                    new CustomerProjectCopyOperation(),
                    new CustomerProjectDeleteOperation(),
                    new ReportsSubprojectProvider(this),
                });
            }
            return lkp;
        }

    4. Make Actions Visible on the Project Node. The NetBeans Project API gives you a number of CommonProjectActions, including for the actions we're dealing with. Make sure the items in bold below are in the "getActions" method of the project node:

        @Override
        public Action[] getActions(boolean arg0) {
            return new Action[]{
                CommonProjectActions.newFileAction(),
                CommonProjectActions.copyProjectAction(),
                CommonProjectActions.moveProjectAction(),
                CommonProjectActions.renameProjectAction(),
                CommonProjectActions.deleteProjectAction(),
                CommonProjectActions.customizeProjectAction(),
                CommonProjectActions.closeProjectAction()
            };
        }

    5. Run the Application. When you run the application, you should see this:

    Let's now try out the various actions:

    Copy. When you invoke the Copy action, you'll see the dialog below. Provide a new project name and location and then the copy action is performed when the Copy button is clicked below: The message you see above, in red, might not be relevant to your project type. When you right-click the application and choose Branding, you can find the string in the Resource Bundles tab, as shown below: However, note that the message will be shown in red, no matter what the text is, hence you can really only put something like a warning message there. If you have no text at all, it will also look odd. If the project has subprojects, the copy operation will not automatically copy the subprojects. Take a look here and here for similar more complex scenarios.

    Move. When you invoke the Move action, the dialog below is shown:

    Rename. The Rename Project dialog below is shown when you invoke the Rename action: I tried it and both the display name and the folder on disk are changed.

    Delete. When you invoke the Delete action, you'll see this dialog: The checkbox is not checkable, in the default scenario, and when the dialog above is confirmed, the project is simply closed, i.e., the node hierarchy is removed from the application. However, if you truly want to let the user delete the project on disk, pass the Project to the DeleteOperationImplementation and then add the children of the Project you want to delete to the getDataFiles method:

        private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation {

            private final CustomerProject project;

            private CustomerProjectDeleteOperation(CustomerProject project) {
                this.project = project;
            }

            @Override
            public List<FileObject> getDataFiles() {
                List<FileObject> files = new ArrayList<FileObject>();
                FileObject[] projectChildren = project.getProjectDirectory().getChildren();
                for (FileObject fileObject : projectChildren) {
                    addFile(project.getProjectDirectory(), fileObject.getNameExt(), files);
                }
                return files;
            }

            private void addFile(FileObject projectDirectory, String fileName, List<FileObject> result) {
                FileObject file = projectDirectory.getFileObject(fileName);
                if (file != null) {
                    result.add(file);
                }
            }

            @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); }
            @Override public void notifyDeleting() throws IOException { }
            @Override public void notifyDeleted() throws IOException { }
        }

    Now the user will be able to check the checkbox, causing the method above to be called in the DeleteOperationImplementation.

    Hope this answers some questions or at least gets the discussion started. Before asking questions about this topic, please take the steps above and only then attempt to apply them to your own scenario.

    Useful implementations to look at:

        http://kickjava.com/src/org/netbeans/modules/j2ee/clientproject/AppClientProjectOperations.java.htm
        https://kenai.com/projects/nbandroid/sources/mercurial/content/project/src/org/netbeans/modules/android/project/AndroidProjectOperations.java

    Read the article

  • StyleCop Custom Rules

    - by Aligned
    There are several blogs on how to do this (http://scottwhite.blogspot.com/2008/11/creating-custom-stylecop-rules-in-c.html, etc). I've found a few useful things to point out.

    Debugging is difficult, but here are the steps (thanks to Tintin's answer). "One way:

    1) Delete your custom rules.
    2) Open Visual Studio (for dev), open your custom rule solution.
    3) Build & deploy the custom rules (a post-build action to copy the rules into the StyleCop folder is handy).
    4) Open Visual Studio (for test).
    5) Use VS (dev) and Attach to Process on devenv.exe (the test VS instance); set breakpoints in the rules you want to debug.
    6) Use VS (test), right-click on a project, and Run StyleCop.
    7) Debug."

    It worked once; now I'm having problems getting it to work again. I also get the message "Cannot evaluate expression because the code of the current method is optimized." when I try to look at properties.

    For looking at the source code of the StyleCop.CSharp.Rules.dll that comes with the install, I used JustDecompile from Telerik.

    Create one XML file and name it the same as the one .cs file (CodingGuidelineRules.cs and CodingGuidelineRules.xml).

    Deploy:
    1. Build in Visual Studio.
    2. Close Visual Studio (StyleCop is running, so you can't overwrite your dll without closing).
    3. Copy the dll from the bin folder to C:\Program Files (x86)\StyleCop 4.7\.
    4. Open the settings file or re-open Visual Studio.
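
    A hedged example of the post-build copy mentioned in step 3 of the debugging list above (Project properties > Build Events > Post-build event command line; the StyleCop 4.7 path is the default install location and may differ on your machine):

        xcopy /y "$(TargetPath)" "C:\Program Files (x86)\StyleCop 4.7\"
        xcopy /y "$(TargetDir)$(TargetName).xml" "C:\Program Files (x86)\StyleCop 4.7\"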

    Read the article

  • Is a "model" branch a common practice?

    - by dukeofgaming
    I just thought it could be a good thing to have a dedicated version control branch for all database schema changes, and I wanted to know if anyone else is doing the same and what the results have been.

    Say that you are working with:

    1. Schema model/documentation (some file where you model the database visually to generate the schema source, say MySQL Workbench, with a .mwb file, which is binary)
    2. Schema source (a .sql file)
    3. Schema-based code generation

    The normal way we were working was with feature branches, so we would do changes to the model files (the database-specific ones), and then have to regenerate points 2 and 3, dealing with the possible conflicts (or even code rewriting). Now say that your workflow goes the same way as the previous item numbering. With a model branch you wouldn't have to reconcile the schema model with binaries in other feature branches, or have to regenerate schema source and regenerate code (which might have human code on top of it). It makes so much sense to me it feels weird not having seen this earlier as a common practice.

    Edit: I'm counting on branch merges to be the assertions for the model matching the code. I use a DVCS, so I don't fear long-lived branches or scary-looking merges. I'm also doing feature branching.
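
    A hedged sketch of the workflow being described, in git syntax (the poster only says DVCS, and the file and branch names here are invented): schema changes live on a long-lived model branch, and feature branches merge from it instead of each regenerating the schema themselves:

        git checkout -b model                    # dedicated branch for schema changes
        git add schema/model.mwb schema/schema.sql
        git commit -m "Add customer table to schema model"
        git checkout feature/invoicing
        git merge model                          # pull the schema (and regenerated code) into a feature branch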

    Read the article

  • Controlling the Sizing of the af:messages Dialog

    - by Duncan Mills
    Over the last day or so a small change in behaviour between 11.1.2.n releases of ADF and earlier versions has come to my attention. This concerns the default sizing of the dialog that the framework automatically generates to handle the display of JSF messages being handled by the <af:messages> component. Unlike a normal popup, you don't have a physical <af:dialog> or <af:window> to set the sizing on in your page definition, so you're at the mercy of what the framework provides. In this case the framework now defines a fixed 250x250 pixel content area dialog for these messages, which can look a bit weird if the message is either very short, or very long. Unfortunately this is not something that you can control through the skin; instead you have to be a little more creative. Here's the solution I've come up with. Unfortunately, I've not found a supportable way to reset the dialog so as to say "just size yourself based on your contents". It is actually possible to do this by tweaking the correct DOM objects, but I wanted to start with a mostly supportable solution that only uses the best practice of working through the ADF client side APIs.

    The Technique

    The basic approach I've taken is really very simple. The af:messages dialog is just a normal richDialog object; it just happens to be one that is pre-defined for you with a particular known name, "msgDlg" (which hopefully won't change). Knowing this, you can call the accepted APIs to control the content width and height of that dialog; as our meerkat friends would say, "simples". (1)

    The JavaScript

    For this example I've defined three JavaScript functions. The first does all the hard work and is designed to be called from server side Java or from a page load event to set the default. The second is a utility function used by the first to validate the values you're about to use for height and width. The final function is one that can be called from the page load event to set an initial default sizing if that's all you need to do.

    Function resizeDefaultMessageDialog()

        /**
         * Function that actually resets the default message dialog sizing.
         * Note that the width and height supplied define the content area
         * So the actual physical dialog size will be larger to account for
         * the chrome containing the header / footer etc.
         * @param docId Faces component id of the document
         * @param contentWidth - new content width you need
         * @param contentHeight - new content height
         */
        function resizeDefaultMessageDialog(docId, contentWidth, contentHeight) {
            // Warning this value may change from release to release
            var defMDName = "::msgDlg";
            // Find the default messages dialog
            msgDialogComponent = AdfPage.PAGE.findComponentByAbsoluteId(docId + defMDName);
            // In your version add a check here to ensure we've found the right object!
            // Check the new width is supplied and is a positive number, if so apply it.
            if (dimensionIsValid(contentWidth)){
                msgDialogComponent.setContentWidth(contentWidth);
            }
            // Check the new height is supplied and is a positive number, if so apply it.
            if (dimensionIsValid(contentHeight)){
                msgDialogComponent.setContentHeight(contentHeight);
            }
        }

    Function dimensionIsValid()

        /**
         * Simple function to check that sensible numeric values are
         * being proposed for a dimension
         * @param sampleDimension
         * @return boolean
         */
        function dimensionIsValid(sampleDimension){
            return (!isNaN(sampleDimension) && sampleDimension > 0);
        }

    Function initializeDefaultMessageDialogSize()

        /**
         * This function will re-define the default sizing applied by the framework
         * in 11.1.2.n versions
         * It is designed to be called with the document onLoad event
         */
        function initializeDefaultMessageDialogSize(loadEvent){
            // get the configuration information
            var documentId = loadEvent.getSource().getProperty('documentId');
            var newWidth = loadEvent.getSource().getProperty('defaultMessageDialogContentWidth');
            var newHeight = loadEvent.getSource().getProperty('defaultMessageDialogContentHeight');
            resizeDefaultMessageDialog(documentId, newWidth, newHeight);
        }

    Wiring in the Functions

    As usual, the first thing we need to do when using JavaScript with ADF is to define an af:resource in the document metaContainer facet:

        <af:document>
          ....
          <f:facet name="metaContainer">
            <af:resource type="javascript" source="/resources/js/hackMessagedDialog.js"/>
          </f:facet>
        </af:document>

    This makes the script functions available to call. Next, if you want to use the option of defining an initial default size for the dialog, you use a combination of <af:clientListener> and <af:clientAttribute> tags like this:

        <af:document title="MyApp" id="doc1">
          <af:clientListener method="initializeDefaultMessageDialogSize" type="load"/>
          <af:clientAttribute name="documentId" value="doc1"/>
          <af:clientAttribute name="defaultMessageDialogContentWidth" value="400"/>
          <af:clientAttribute name="defaultMessageDialogContentHeight" value="150"/>
          ...

    Just in Time Dialog Sizing

    So what happens if you have a variety of messages that you might add, and in some cases you need a small dialog and in other cases a large one? Well, in that case you can re-size these dialogs just before you submit the message. Here's some example Java code:

        FacesContext ctx = FacesContext.getCurrentInstance();
        // reset the default dialog size for this message
        ExtendedRenderKitService service =
                Service.getRenderKitService(ctx, ExtendedRenderKitService.class);
        service.addScript(ctx, "resizeDefaultMessageDialog('doc1',100,50);");

        FacesMessage msg = new FacesMessage("Short message");
        msg.setSeverity(FacesMessage.SEVERITY_ERROR);
        ctx.addMessage(null, msg);

    So there you have it. This technique should, at least, allow you to control the dialog sizing just enough to stop really objectionable whitespace or scrollbars.

    (1) Don't worry if you don't get the reference; let's just say my kids watch too many adverts.

    Read the article

  • What is the best approach to solve a factory method problem which has to be an instance?

    - by Iago
    I have to add new functionality to a legacy web service project and I'm wondering what the best approach is for a concrete situation. The web service is simple: it receives an XML file, unmarshals it, generates the response objects, marshals them, and finally sends the response as an XML file. For every XML file received, the web service always responds with the same XML structure. What I have to do is generate a different XML file according to the XML received.

    So I have a controller class which has all the marshalling/unmarshalling operations, but this controller class has to be an instance. Depending on the XML received, I need some marshalling methods or others. Trying to make as few changes to the legacy source as possible, what is the best approach? My first approach was to apply the factory method pattern to the controller class, but this class has to be an instance. I want to keep, as far as possible, this structure:

        classController.doMarshalling();

    I think this one is a bit smelly:

        if(XMLReceived.isTypeOne())
            classController.doMarshallingOne();
        else if(XMLReceived.isTypeTwo())
            classController.doMarshallingTwo();
        else if(XMLReceived.isTypeThree())
            classController.doMarshallingThree();
        else if ...

    I hope my question is well understood.
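
    A hedged sketch of the factory-method shape being asked about; every name below is invented rather than taken from the real project. The type checks move into one factory, and the caller keeps working against a single controller instance:

        interface ReceivedXml {
            boolean isTypeOne();
            boolean isTypeTwo();
        }

        interface MarshallingController {
            void doMarshalling();
        }

        final class TypeOneController implements MarshallingController {
            @Override public void doMarshalling() { /* marshalling for response type one */ }
        }

        final class TypeTwoController implements MarshallingController {
            @Override public void doMarshalling() { /* marshalling for response type two */ }
        }

        final class ControllerFactory {
            static MarshallingController forXml(ReceivedXml xml) {
                if (xml.isTypeOne()) return new TypeOneController();
                if (xml.isTypeTwo()) return new TypeTwoController();
                throw new IllegalArgumentException("Unsupported XML type");
            }
        }

        // usage: the caller stays as close as possible to classController.doMarshalling()
        // MarshallingController classController = ControllerFactory.forXml(receivedXml);
        // classController.doMarshalling();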

    Read the article

  • Installing 13.04 on an EFI partition - Share with Windows 8?

    - by mengelkoch
    Information I've found here suggests that for my system, I need to install 13.04 into an EFI-type partition, since it needs to boot as UEFI. I also understand it is advisable to have only ONE EFI partition on the disk; I've read here that it is OK for Ubuntu and Windows to share the same partition (please confirm). When I try to install into the existing EFI drive, I get the message "No root file system is defined. Please correct from partitioning menu." Do I change the EFI boot partition to another type? Doesn't that defeat the purpose? If I change it to Ext4 Journaling File System, I am given the opportunity to define the '/' Mount point. I haven't proceeded beyond this point for fear I am going to destroy Windows 8 by altering this partition. BTW, I created three partitions in Windows before installing, per the helpful response to my previous question. But if I try to install into the partition I created for Ubuntu, I get the "No root file system..." error again.

    Read the article

  • Cleaning Up After Chrome

    - by Mark Treadwell
    I find Google Chrome, which I have no interest in, is continually getting installed on machines in my house, mostly due to Adobe Shockwave bringing it along as an install package. (Family members are agreeing to the download, not realizing the Chrome is getting dropped as well.) My major issue after uninstalling Chrome is that you can no longer click on links in Outlook emails. There is a lot on the web about this, and Google has not been proactive at fixing their uninstaller. I have now added a registry file to my Win64 systems to reset the problem registry keys and clear the error. This registry file is pretty simple. It merely resets HKEY_CURRENT_USER\Software\Classes\.htm, HKEY_CURRENT_USER\Software\Classes\.html, and HKEY_CURRENT_USER\Software\Classes\.shtml back to their default values of "htmlfile". Chrome takes over the handling of these file extensions because its default install is to make itself the default web browser. The Chrome uninstalled fails to clear/reset them. In troubleshooting this, I looked in my registry based on the web info on the Chrome uninstall problem. Since my system had never had Chrome installed, my registry did not have the problem keys. To troubleshoot, I installed (ugh!) and uninstalled Chrome. Sure enough, Chrome left the expected debris with a value string of "ChromeHTML.PR2EPLWMBQZK3BY7Z2BFBMFERU" or something similar. Resetting these values fixed the problem. I see that Chrome leaves quite a bit of debris behind in the registry. I guess it is creating the keys then leaving them behind, even though their presence (with bad data) subsequently affects operations.
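
    A hedged sketch of the registry file described above (standard .reg syntax; it only resets the three per-user extension keys to their default value):

        Windows Registry Editor Version 5.00

        ; reset the handlers Chrome's uninstaller leaves behind
        [HKEY_CURRENT_USER\Software\Classes\.htm]
        @="htmlfile"

        [HKEY_CURRENT_USER\Software\Classes\.html]
        @="htmlfile"

        [HKEY_CURRENT_USER\Software\Classes\.shtml]
        @="htmlfile"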

    Read the article

  • How do I stop color changes when quitting vi from a terminal emulator?

    - by Michael Warhol
    I have a problem with colors when using vi under Ubuntu 12.04. I'm connecting to my Ubuntu server from a PC, using PowerTerm terminal emulation software. I have PowerTerm set up to display black text on a grey background. When I connect to the Ubuntu box, the screen is fine. When I open a file with vi, the screen is fine. The text is black on a gray background, which is normal for my PowerTerm setup. However, if the file is less than a full screen long, the remainder of the screen is a black background. When I quit vi, the entire background turns black, and the text becomes white. I have to do a Terminal Reset to restore my normal text and background colors.

    What I want is for there to be no change at all when I use vi. The text should be black and the background grey. I have another server loaded with RedHat 9, and that acts normally; colors don't change when using vi. Here is my .vimrc file:

        set compatible
        syntax off
        let g:loaded_matchparen=1
        set nocp
        set noincsearch
        set nohlsearch
        set noshowmatch
        set bg=dark

    I've tried set bg=dark and set bg=light. It makes no difference. Is there some other set command that would clear this up for me, or some TERM setting (my TERM is set to linux)?

    Read the article

  • Network: Incoming connections work, outgoing fails

    - by anirvan
    I recently set up my own server at home to run Ubuntu 12.04 Server Edition. On booting up, I noticed that a message related to networking comes up and the booting process pauses. The message read something like "waiting for network configuration" and, after a while, "waiting another 60 seconds...". After booting, I realised that any command which requires a network connection was not working: ping, apt-get install, etc. On firing the ifup eth0 command, I get the error:

        RTNETLINK answers: File exists
        Failed to bring up eth0.

    I also realised, while searching the web for this problem, that this is probably one of the most common networking-related issues; however, most of the questions are around setting up multiple IPs for the same machine. ifdown eth0 also fails, stating that eth0 is not configured. My /etc/network/interfaces file has a simple configuration for a static IP:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address xx.xx.xx.xx
            netmask xx.xx.xx.xx
            broadcast xx.xx.xx.xx
            gateway xx.xx.xx.xx
            dns-nameservers xx.xx.xx.xx

    The strangest part of this problem is that, while I can't connect to anything outside, I can ping this particular server using the static IP configured in the interfaces file, and I can even SSH into it! I'm really at a loss with this problem, and any guidance is much appreciated. Thanks!
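
    A hedged first-aid sketch for the "RTNETLINK answers: File exists" symptom (it usually means an address or route is already attached to the interface): flush eth0 and bring it up again with verbose output:

        sudo ip addr flush dev eth0
        sudo ip route flush dev eth0
        sudo ifdown --force eth0
        sudo ifup -v eth0        # -v shows exactly which step fails, if any still does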

    Read the article

  • CoffeeScript - inability to support progressive adoption

    - by Renso
    First off, what is CoffeeScript? Web definitions: CoffeeScript is a programming language that compiles statement-by-statement to JavaScript. The language adds syntactic sugar inspired by Ruby and Python to enhance JavaScript's brevity and readability, as well as adding more sophisticated features like array comprehension and pattern matching.

    The issue with CoffeeScript is that it eliminates any progressive adoption. It is a purist approach, kind of like the Amish: if you're not born Amish, tough luck. So folks with thousands of lines of JavaScript code will have a tough time converting it to CoffeeScript. You can use the js2coffee API to convert a JavaScript file to CoffeeScript, but in my experience it had trouble converting the files. It would convert the file to CoffeeScript without any complaints, but then compiling the resulting CoffeeScript produced errors with, guess what, INDENTATION!

    I tried to convince the CoffeeScript community on GitHub, but got lots of push-back to progressive adoption, with comments like "stupid", "crap", "child's comportment", "it's like Ruby, Python", "legacy code", etc. As a matter of interest, one of the first comments was that the code needs to be re-designed before being converted to CoffeeScript. Well, I rest my case then :-)

    So far the community on GitHub has been very reluctant to even consider introducing some way to define code blocks; obviously curly braces are not an option, as they are used for JSON object definitions. They also have no consideration for a progressive adoption where some, if not all, JavaScript syntax would be allowed, which means all of us in the real world who have thousands of lines of JavaScript will have a real issue converting it over. Worse, I for one lack the confidence that tools like js2coffee will produce the correct indentation, which determines the flow of control in your code! Actually, it is hard for me to find enough justification for using spaces or tabs to control the flow of code. It is no wonder that C#, C, C++ and Java, all enterprise-scale languages, still use curly braces. I have never seen an enterprise app built with Ruby or PHP.

    Let me know what your concerns are with CoffeeScript and how you dealt with large-scale JavaScript conversions to CoffeeScript.

    Read the article

  • Virtualization in Ubuntu 11.10

    - by Mascarpone
    Since Ubuntu 11.10 uses a new kernel, it's very difficult to get decent support for virtualization. VirtualBox doesn't support guest additions for Ubuntu 11.10, so I can't copy to and from my Ubuntu desktop and Windows, which I absolutely require; plus, FreeBSD seems not to be able to use DHCP without guest additions. Virt-manager instead gives an error on launch:

        Unable to open a connection to the libvirt management daemon.
        Libvirt URI is: qemu:///system
        Verify that:
         - The 'libvirt-bin' package is installed
         - The 'libvirtd' daemon has been started
         - You are member of the 'libvirtd' group

        unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied
        Traceback (most recent call last):
          File "/usr/share/virt-manager/virtManager/connection.py", line 1146, in _open_thread
            self.vmm = self._try_open()
          File "/usr/share/virt-manager/virtManager/connection.py", line 1130, in _try_open
            flags)
          File "/usr/lib/python2.7/dist-packages/libvirt.py", line 102, in openAuth
            if ret is None:raise libvirtError('virConnectOpenAuth() failed')
        libvirtError: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied

    The problem is solved by running virt-manager as root, but I don't like that. How do I change permissions to run virt-manager as a normal user? Is there a way to install guest additions on Ubuntu 11.10?
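
    A hedged sketch matching the error text (package and group names are the Ubuntu 11.10 ones): make sure the daemon is running and add your user to the libvirtd group, then log out and back in:

        sudo service libvirt-bin start
        sudo usermod -a -G libvirtd $USER
        groups $USER             # should now list libvirtd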

    Read the article

  • Visual Studio 2012 first impressions...no Macros!

    - by bconlon
    Yesterday I installed Microsoft Visual Studio 2012 for the first time (all 8.5 GB), and after 20 years of (mostly) happy times using VS they have removed Macros, one of the handiest features.

    The first thing I wanted to do when I upgraded my VS2010 project was to add a #elseif block to each file. This would usually be a simple case of Find in Files for the previous #elseif and then Ctrl+Shift+R to record a macro, which would be: F8 (to select the next file from the find list), F3 (to find the correct position in the file), Ctrl+V to paste the new code. Then all I would need to do is keep Ctrl+Shift+P (Play Macro) pressed until all the files were processed. But alas, Ctrl+Shift+R does nothing! I won't say that I use Macros every day, but it was a very useful feature.

    To continue my moaning a little more, I also don't like the bland interface. This has been well documented by others, but now that I have used it myself, I find it difficult to tell one grey area of screen from another, and the lack of colour makes the icons unclear. I also don't see why the menus now need to SHOUT in capital letters.

    On the plus side, they have now added the ability to see WPF properties in the debugger... a bit of an oversight in Visual Studio 2010. Oh, but you still can't edit and continue on files that contain templated code.

    Whilst Visual Studio 2012 is not a complete disaster like Windows 8 (why develop a desktop OS to be the same as a smart-device OS?), it does not float my boat. Rant over.

    Read the article

  • Why can't I run any Android NDK commands?

    - by TheBuzzSaw
    I had been running Mint 12 before, and everything was working there. I switched to Ubuntu 12.04, and now I am very frustrated. When I run ndk-build, I get:

        /home/buzz/ndk/prebuilt/linux-x86/bin/make: not found

    So, I changed to that folder directly. When I type in ./make, I get:

        bash: ./make: No such file or directory

    Typing ls clearly shows the file where I am! I did some hacking around (pointing to external tools) to get past each error (just to experiment), and I ran into this:

        /home/buzz/ndk/toolchains/arm-linux-androideabi-4.6/prebuilt/linux-x86/bin/arm-linux-androideabi-gcc: Command not found

    Why? Why are all these files unable to be found? As I said above, this was all working just fine in another distro. What changed? What's extra frustrating is that if I push TAB to auto-complete, it works. So, the file is clearly there (and clearly marked with execution permissions). So, why can't it be found?
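
    A hedged guess at the usual cause on a fresh 64-bit 12.04 install: the NDK's prebuilt binaries are 32-bit, and "No such file or directory" for a file that is plainly there is the loader failing, not the shell. Installing the 32-bit compatibility libraries is the common fix:

        file ~/ndk/prebuilt/linux-x86/bin/make   # confirms it is a 32-bit ELF executable
        sudo apt-get install ia32-libs           # 32-bit runtime libraries on Ubuntu 12.04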

    Read the article

  • Enterprise Instrumentation: The 'sessionName' parameter of value 'TraceSession' is not valid

    - by Michael Freidgeim
    We are still using Enterprise Instrumentation (created back in the .NET 1.1 days). In a new Server 2008 environment with IIS 7 we have the following errors:

        The 'sessionName' parameter of value 'TraceSession' is not valid. A trace session of this name does not exist in the TraceSessions configuration file for Windows Trace Session Manager service. Ensure that a session of this name exists in the TraceSessions configuration file and that the Windows Trace Session Manager service is started.
           at Microsoft.EnterpriseInstrumentation.EventSinks.TraceEventSink..ctor(IDictionary parameters, EventSource eventSource)
           --- End of inner exception stack trace ---
           at System.RuntimeMethodHandle._InvokeConstructor(IRuntimeMethodInfo method, Object[] args, SignatureStruct& signature, RuntimeType declaringType)
           at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
           at System.RuntimeType.CreateInstanceImpl(BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture, Object[] activationAttributes)
           at Microsoft.EnterpriseInstrumentation.EventSinks.EventSink.CreateNewEventSinks(DataRow[] eventSinkRows, EventSource eventSource)

    I've seen the same errors on development Win7 machines when using IIS. It does not seem to be a problem on Cassini.

    I've checked that the Windows Trace Session Manager service has started and that the file C:\Program Files (x86)\Microsoft Enterprise Instrumentation\Bin\Trace Service\TraceSessions.config has the corresponding entry:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
            <defaultParameters minBuffers="4" maxFileSize="10" maxBuffers="25" bufferSize="20" logFileMode="sequential" flushTimer="3" />
            <sessionList>
                <session name="TraceSession" enabled="false" fileName="C:\Program Files (x86)\Microsoft Enterprise Instrumentation\Bin\Trace Service\Logs\TraceLog.log" />
            </sessionList>
        </configuration>

    The errors still continue, but I was able to disable the parameter in the eventSink configuration to avoid them:

        <eventSink name="traceSink" description="Outputs events to the Windows Event Trace." type="Microsoft.EnterpriseInstrumentation.EventSinks.TraceEventSink">
            <!-- MNF disabled parameter to avoid error "The 'sessionName' parameter of value 'TraceSession' is not valid"
                <parameter name="sessionName" value="TraceSession" />
            -->
        </eventSink>

    Related old post: http://bytes.com/topic/net/answers/104761-enterprise-instrumentation-windows-trace-session-manager

    One day I wish to replace all Enterprise Instrumentation calls with NLog.

    Read the article

  • CodePlex Daily Summary for Thursday, August 07, 2014

    Popular Releases

    - SharePoint 2013 Lync Presence using jQuery: jQuery Lync Presence: I have fixed the same user multiple times on page issue in this release (Issue # 1506)
    - Dynamics AX IEIDE Project Explorer: IEIDE.1.1.40803.1: Installing the project: 1. Install the ax model file; do this by running the following commands on the machine having the AX Management Tools: 1.a. cmd (depending if you have UAC enabled, you may want to Run as administrator); 1.b. net stop aos60$01 (wait until you have your AOS stopped); 1.c. cd "c:\Program Files\Microsoft Dynamics AX\60\ManagementUtilities\" 1.d. axutil import /file:c:\IEIDE.1.1.40806.1.axmodel /verbose 1.e. net start aos60$01 1.f. start the client and select Skip 2. Per...
    - JSLint.NET: JSLint.NET 1.6.4: Bugs: #38: MSBuild task support for linked settings file. #40: Explicitly typed imports / exports required in some Visual Studio environments.
    - AD4 Application Designer for flow based .NET applications: AD4.AppDesigner.23.24: AD4.Iteration.23.24 (Advanced Rendering Features). DesignAttribute parameter LabelPosition to define position of flow pin label: AD4_SyntaxDefinition extended by parameter LabelPosition; FileExtension extended to parse value of parameter LabelPosition; AppDesignAttribute extended by LabelPosition; RaiseAlarmFlow of AlarmClockSample.10 extended to test the new attribute; RenderFlowChartPinCaptions improved to handle LabelPosition parameter. ToDo: Some tutorials are unfinished but coming soon...
    - Instant Beautiful Browsing: IBB 14.3 Alpha: An alpha release of IBB. After 3 years of the last release this version is made from scratch, with tons of new features like: Make your own IBB aps. HTML 5. Better UI. Extreme Windows 8 resemblance. Photos. Store. Movement TONS of times smother compared to previous versions. Remember that this is AN ALPHA release, I hope I will have "IBB 14" finished by December. The documentation on how to create a new application for IBB will come next month
    - jQuery List DragSort: jQuery List DragSort 0.5.2: Fixed scrollContainer removing deprecated use of $.browser so should now work with latest version of jQuery. Added the ability to return false in dragEnd to revert sort order. Project changes: Added nuget package for dragsort https://www.nuget.org/packages/dragsort; Converted repository from SVN to Mercurial
    - Forms 7 - A lightweight InfoPath alternative for SharePoint: Forms7 0.0.07 Alpha Release: Minor bug fixes for repeating content functionality and modifications so that Forms7 will work with older versions of Internet Explorer
    - EASTester: EASTester 1.4: This release has several new features: You can set proxy server settings. This means you can point it to Fiddler and see more details on the requests and responses. The default is 127.0.0.8 and port 8888 - which is what Fiddler's proxy defaults to. There is limited ability for the application to look up helpful information on the first status code in the response - you will see it displayed in the lowest text box of the Conversation window. This will make the Conversation window a lot easi...
    - Lexisnexis directory of corporate affiliates Text Analyzer: Lexisnexis Text Analyzer: This version has the functions below: standards to analyze, columns, keywords editing; import of document; export to CSV and Microsoft Excel file
    - Wix# (WixSharp) - managed interface for WiX: Release 1.0.0.0: Release 1.0.0.0: Custom UI, Custom MSI Dialog, Custom CLR Dialog, External UI
    - Recaptcha for .NET: Recaptcha for .NET v1.6.0: What's New? Bug fixes, optimized code
    - Math.NET Numerics: Math.NET Numerics v3.2.0: Linear Algebra: Vector.Map2 (map2 in F#), storage-optimized; Linear Algebra: fix RemoveColumn/Row early index bound check (was not strict enough); Statistics: Entropy ~Jeff Mastry; Interpolation: use Array.BinarySearch instead of local implementation ~Candy Chiu; Resources: fix a corrupted exception message string; Portable Build: support .Net 4.0 as well by using profile 328 instead of 344; .Net 3.5: F# extensions now support .Net 3.5 as well; .Net 3.5: NuGet package now contains pro...
    - Virto Commerce Enterprise Open Source eCommerce Platform (asp.net mvc): Virto Commerce 1.11: Virto Commerce Community Edition version 1.11. To install the SDK package, please refer to the SDK getting started documentation. To configure the source code package, please refer to the source code getting started documentation. This release includes many bug fixes and minor improvements. More details about this release can be found on our blog at http://blog.virtocommerce.com.
    - BoxStarter: Boxstarter 2.4.80: Running the Setup.bat file will install Chocolatey if not present and then install the Boxstarter modules.
    - ProjkyAddin ssms addin script shortcut ssmsaddin: projkyaddin source code and msi files.: projkyaddin source code and msi files.
    - Json.NET: Json.NET 6.0 Release 4: New feature - Added Merge to LINQ to JSON; New feature - Added JValue.CreateNull and JValue.CreateUndefined; New feature - Added Windows Phone 8.1 support to .NET 4.0 portable assembly; New feature - Added OverrideCreator to JsonObjectContract; New feature - Added support for overriding the creation of interfaces and abstract types; New feature - Added support for reading UUID BSON binary values as a Guid; New feature - Added MetadataPropertyHandling.Ignore; New feature - Improv...
    - VidCoder: 1.5.24 Beta: Added NL-Means denoiser. Updated HandBrake core to SVN 6254. Added extra error handling to DVD player code to avoid a crash when the player was moved.
    - PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.5: Added Send-Keys function to send a sequence of keys to an application window (thanks to mmashwani); Added 3 optimization/stability improvements to Execute-Process following MS best practice (thanks to mmashwani); Fixed issue where Execute-MSI did not use value from XML file for uninstall but instead ran all uninstalls silently by default; Fixed error on 1641 exit code (should be a success like 3010); Fixed issue with error handling in Invoke-SCCMTask; Fixed issue with deferral dates where th...
    - SEToolbox: SEToolbox 01.041.012 Release 1: Added voxel material textures to read in with mods. Fixed missing texture replacements for mods. Fixed rounding issue in raytrace code. Fixed repair issue with corrupt checkpoint file. Fixed issue with updated SE binaries 01.041.012 using new container configuration.
    - Magick.NET: Magick.NET 6.8.9.601: Magick.NET linked with ImageMagick 6.8.9.6. Breaking changes: Changed arguments for the Map method of MagickImage; QuantizeSettings uses Riemersma by default.

    New Projects

    - ADShop: Project ADShop. Category: Shop / Business. Dev: Andy Nguyễn. Created: 7/13/2014. Notes: This source is for students. Don't use for marketing online!
    - Aspose Java for Docx4j: Comparison Between Docx4j And Aspose
    - Code Coverage With Visual Studio: This is going to be Code Coverage Plugin for Visual Studio
    - Euler circuit: Initial release
    - Lexisnexis directory of corporate affiliates Text Analyzer: The program is made for analyzing documents, Lexisnexis directory of corporate affiliates.
    - Manipulator: It's an HTML tool that provides you some functions. Written in javascript.
    - MicroJSON - Lightweight JSON library: MicroJSON is a lightweight JSON library that provides methods to create and read JSON objects and strings. It is useful for .Net Micro Framework.
    - Ollies MSCRM Tools: Useful tools for your working life with MSCRM
    - Online: This is a demo app
    - PESS2: nothing particular
    - PrivateScriptLanguage: PSL is a simple Script-Language which can be used for custom shell-development or low-level scripting in applications. You can easily integrate it in any app!
    - QuickDAL: QDAL provides a simple and efficient Data Access Layer between business entities and a T-SQL compatible database.
    - Regular delaunay triangulation: Initial release
    - SimpleWebService: This application should provide a simple platform to simulate all kinds of Web-API and/or network services, to provide a good testing environment for developers
    - The Mario Kart 8 App: Well hello there people! Today I bring to you a new working project for Mario Kart 8 called The Mario Kart 8 App!
    - Visual Studio Online REST API: Visual Studio Online Work Item Tracking REST API sample client written in C#.

    Read the article

  • Tried to install some software, it says some packages are damaged, cannot fix them

    - by lempira
    So, I go to the Ubuntu Software Center, and as soon as it opens, a window pops up with the following text: "Items cannot be installed or removed until the package catalog is repaired. Do you want to repair it now?" Then I click the "Repair" button, and a new window pops up with the following text: "Package operation failed. The installation or removal of a software package failed." Then I click on the "Details" button, which returns me the following text:

        installArchives() failed: Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        Preconfiguring packages ...
        Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        Preconfiguring packages ...
        Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        Preconfiguring packages ...
        dpkg: warning: 'ldconfig' not found in PATH or not executable.
        dpkg: error: 1 expected program not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.
        Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (2)
        dpkg: warning: 'ldconfig' not found in PATH or not executable.
        dpkg: error: 1 expected program not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.

    What should I do?
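
    A hedged recovery sketch based on the messages above, which say root's PATH is missing the sbin directories (so dpkg cannot find ldconfig and perl cannot find locale): restore a sane PATH in a root shell and finish the pending configuration:

        sudo -i
        export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        dpkg --configure -a
        apt-get install -f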

    Read the article

  • Step 2 of instructions is not clear to me

    - by Albert Frye
    I want to make a bootable USB stick. I run the UUI. I see the instructions on this site: http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-windows Step 1 says: Select "Ubuntu Desktop Edition" from the dropdown list Okay, so the actual title of the drop down box is: Select a "Linux Distribution" from the dropdown to put on your USB I am pretty new to computers. 67 years old. Live alone. Bought my first computer 3 months ago. So I will have to assume that when the instructions say "Ubuntu Desktop Edition", that means the same thing as "Linux Distribution". Okay, No big leap there. So far, so good . . . . . . . So I pick the very first selection: Ubuntu 13.10 Desktop i386 I'm not sure why there are so many choices, but I'm guessing I'm pretty safe with the first one. It's for a Toshiba Satellite laptop 64 bit Windows 7. Okay, now for step 2: The instructions say: Click 'Browse' and open the downloaded ISO file. The message in the window just before the "Browse" button says: Browse to your ubuntu-13.10*desktop*i386.iso -- Okay, so where's that file? So I click "Browse" and start looking for that file. It is nowhere to be found. So where the heck is it?

    Read the article

  • PHP-based indexing and search implementation

    - by Chris
    Is there such thing? I designed a while back a rudimentary form based app for my users. We receive from our suppliers hardware manufacturing data in XML files: file name is made of eleven fields separated by tildes, with each field having its own meaning. R&D guys wanted to be able to search each field of the file names so I used regex() with decent results. Problem is that we have now in the upwards of 2.5 million files. And my app can't hack it anymore. I looked at Apache Lucene & Solr. Though it seemed like the best solution to my problem, the fields in the filenames are not peers to the file content. Big no-no with Solr. What is the best way to implement a PHP app with indexing and search capability with such large number of files? Do I have to buy Zend and use Zend_Search? Is it the only way? Thanks for your input.
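
    A hedged sketch of the indexing idea in PHP (the table, column and path names are invented): split each file name on the tildes once, store the eleven fields as their own indexed columns, and search those columns instead of running a regex over 2.5 million names:

        <?php
        // one-time (or incremental) indexing pass
        $pdo = new PDO('mysql:host=localhost;dbname=mfgdata', 'user', 'pass');
        $insert = $pdo->prepare(
            'INSERT INTO xml_files (filename, f1, f2, f3, f4, f5, f6, f7, f8, f9, f10, f11)
             VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)'
        );

        foreach (new DirectoryIterator('/data/xml') as $file) {
            if ($file->isDot()) {
                continue;
            }
            $fields = explode('~', $file->getBasename('.xml'));
            if (count($fields) !== 11) {
                continue;                    // name does not follow the eleven-field scheme
            }
            $insert->execute(array_merge(array($file->getFilename()), $fields));
        }

        // a field search is then an indexed lookup, e.g.:
        // SELECT filename FROM xml_files WHERE f3 = 'some-value';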

    Read the article
