Search Results

Search found 24301 results on 973 pages for 'execution process mfg'.


  • Upgrading to 12.10 on an external hard drive

    - by Tom Childers
    I did some googling on this and didn't find anything specific for my situation. I currently have 12.04 installed on an external USB hard drive, and it's working great. I want to upgrade it to 12.10. My bandwidth is very limited, so a friend will download 12.10 for me and put it on a flash stick; then I can upgrade without having to do the download myself. Which particular version of the 12.10 download file(s) should I get? Are there alternate 12.10 downloads that have all the packages? How do I set it up so that when I upgrade 12.04 it looks in some local repository for the 12.10 files? Can I just dump the 12.10 files in some local directory, or do I have to go through some complex commands to create a local repository? I'm pretty new to Linux, so a long series of complex terminal commands will probably be a show-stopper for me. Remember that my 12.04 install resides on an external hard drive, and I have a laptop with multiple USB ports. Thanks! Advait
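
    For reference, one way to turn a directory of downloaded .deb files into a repository apt can use is dpkg-scanpackages. This is only a minimal sketch, and the /var/local/repo path is an assumption for illustration:

        sudo apt-get install dpkg-dev              # provides dpkg-scanpackages
        cd /var/local/repo                         # hypothetical directory holding the 12.10 .deb files
        dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
        # point apt at the local directory (trusted=yes skips signature checks for a local file: repo)
        echo "deb [trusted=yes] file:/var/local/repo ./" | sudo tee /etc/apt/sources.list.d/local.list
        sudo apt-get update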


  • Pull Request Conversations, Inline Diff Enhancements

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed a new version of the CodePlex website today.

    Pull Request Conversations

    Previously, the only way for project members and users who submitted pull requests to converse was via e-mail. This complicated the review process and made conversations isolated and difficult to track. For this release, we’ve added functionality that enables you to have those same conversations within the pull request page. When you view a pull request, you’ll now see “Comments” and “Changes” tabs, with current comments displayed.

    Inline Diff Enhancements

    We tweaked the inline diff experience to make it easier to traverse diff blocks. When you open up the inline diff experience, you’ll now see up and down arrows. To move between the diff blocks, you can use those arrows or utilize the available keyboard shortcuts. Lastly, we have also brought the inline diff experience to the source control changes page for project and fork changesets. You can see both enhancements live by viewing the associated pull request or changeset changes on WikiPlex.

    The CodePlex team values your feedback. We are frequently monitoring Twitter, our Discussions, and Issue Tracker. If you have not visited the Issue Tracker recently, please take a few minutes to suggest or vote on a feature you would like to see implemented.


  • How to display password policy information for a user (Ubuntu)?

    - by C.W.Holeman II
    The Ubuntu 9.04 Server Guide (Security → User Management) states that there is a default minimum password length for Ubuntu: "By default, Ubuntu requires a minimum password length of 4 characters". Is there a command for displaying the current password policies for a user, in the way that the chage command displays the password expiration information for a specific user?

        > sudo chage -l SomeUserName
        Last password change                              : May 13, 2010
        Password expires                                  : never
        Password inactive                                 : never
        Account expires                                   : never
        Minimum number of days between password change    : 0
        Maximum number of days between password change    : 99999
        Number of days of warning before password expires : 7

    I'm asking because the alternative, examining the various places that control the policy and interpreting them by hand, could contain errors. A command that reports the composed policy would be used to check the policy-setting steps.
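
    There is no single built-in command I know of that prints the composed policy, but as a rough sketch, the pieces can be pulled from the files that define it (paths are the Ubuntu defaults):

        # password aging defaults for new accounts
        grep -E '^PASS_(MAX|MIN|WARN)' /etc/login.defs
        # PAM password-quality settings (e.g. the pam_unix "min=4" behind the 4-character default)
        grep -v '^#' /etc/pam.d/common-password
        # per-user aging values
        sudo chage -l SomeUserName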


  • Package manager doesn't work anymore

    - by LukaD
    I'm using Ubuntu 10.10 and recently my package manager stopped working because of some dependency problem. I can't upgrade, install, or uninstall anything at all, which is a huge problem. I couldn't find a solution with Google, so I'm asking here for help. This is what apt-get -f install outputs:

        LANG=en_US.UTF-8 sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following package was automatically installed and is no longer required:
          firefox-4.0-core
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Setting up openjdk-6-jre-headless (6b20-1.9.5-0ubuntu1) ...
        update-alternatives: error: alternative path /usr/lib/jvm/java-6-openjdk/jre/bin/java doesn't exist.
        dpkg: error processing openjdk-6-jre-headless (--configure):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         openjdk-6-jre-headless
        E: Sub-process /usr/bin/dpkg returned an error code (1)
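
    The failing step is the openjdk-6-jre-headless post-install script, which expects /usr/lib/jvm/java-6-openjdk/jre/bin/java to exist. A common recovery path, offered here only as a hedged sketch rather than a guaranteed fix, is to force the broken package out and reinstall it:

        # remove the half-configured package even though it is flagged for reinstall
        sudo dpkg --remove --force-remove-reinstreq openjdk-6-jre-headless
        # pull a fresh copy and let dpkg run its scripts again
        sudo apt-get install --reinstall openjdk-6-jre-headless
        # then let apt settle any remaining dependencies
        sudo apt-get -f install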


  • Workflow: Deploy Operating Systems

    - by Owen Allen
    The Deploy Operating Systems workflow is a workflow document that we added recently. It shows you how to get operating systems up and running in your environment. It's mostly linear, but it's a bit more complicated than some of the others. It's built around a pair of images.

    In both images, the left side shows the prerequisites for the whole process. Before you can deploy operating systems, you have to have Ops Center fully installed, with libraries set up and hardware already discovered.

    Once you've done that preparation, the first image walks you through all of the OS deployment steps. First you discover existing operating systems, then you provision Oracle Solaris 10 or Oracle Solaris 11. If you're not planning on using virtualization, then your deployment is done, and you're directed to the operate workflows.

    If you are interested in virtualization, though, you go on to the second image, which walks you through deploying virtualization, sending you to the Deploying Oracle Solaris 10 Zones, Deploying Oracle Solaris 11 Zones, or Deploying Oracle VM Server for SPARC workflows, depending on what kind of virtualization you're planning on using. Once you've done that, you're ready to go on to the operation workflows.


  • VirtualBox 4.0.10 is now available for download

    - by user12611829
    VirtualBox 4.0.10 has been released and is now available for download. You can get binaries for Windows, OS X (Intel Mac), Linux and Solaris hosts at http://www.virtualbox.org/wiki/Downloads The full changelog can be found here. The high points for the 4.0.10 maintenance release include:

    - GUI: fixed disappearing settings widgets on KDE hosts (bug #6809)
    - Storage: fixed hang under rare circumstances with flat VMDK images
    - Storage: a saved VM could not be restored under certain circumstances after the host kernel was updated
    - Storage: refuse to create a medium with an invalid variant
    - Snapshots: none of the hard disk attachments must be attached to another VM in normal mode when creating a snapshot
    - USB: fixed occasional VM hangs with SMP guests
    - USB: proper device detection on RHEL/OEL/CentOS 5 guests
    - ACPI: force the ACPI timer to return monotonic values to improve behavior with SMP Linux guests
    - RDP: fixed screen corruption under rare circumstances
    - rdesktop-vrdp: updated to version 1.7.0
    - OVF: under rare circumstances some data at the end of a VMDK file was not written during export
    - Mac OS X hosts: Lion fixes
    - Mac OS X hosts: GNOME 3 fix
    - Linux hosts: fixed VT-x detection on Linux 3.0 hosts
    - Linux hosts: fixed Python 2.7 bindings in the universal Linux binaries
    - Windows hosts: fixed leak of thread and process handles
    - Windows Additions: fixed bug when determining the extended version of the Guest Additions
    - Solaris Additions: fixed installation to 64-bit Solaris 10u9 guests
    - Linux Additions: RHEL6.1/OL6.1 compile fix
    - Linux Additions: fixed a memory leak during VBoxManage guestcontrol execute


  • Life Is Full Of Changes (Part 1)

    - by Brian Jackett
    Today will be my last day with Sogeti. I’ve been with Sogeti USA for just over 4 years. In that time I’ve gotten to work on some great projects, develop relationships with some brilliant and passionate people, participate in the .Net developer and SharePoint communities, and grow my skills in a number of areas I’m passionate about.

    As with all good things, though, they must come to an end. I’ve accepted a position with another company and will provide more details once the transition has completed. This decision was a difficult one to make, but it provides a great career opportunity on many levels. As much as my new schedule allows, I plan to continue participating in local user groups, speaking at conferences, and blogging.

    Speaking of which, you may have noticed my reduced blogging activity in the past few months. In addition to a career change, I’m also in the process of moving to a new residence (only a few miles from my current residence, so I’ll still be in Columbus). Searching for a new place, filling out paperwork, and all of the other work associated with this move has taken away a good chunk of the time I used to devote to blogging. Once everything gets settled with the move and job change, I’ll re-evaluate how much time I can devote to blogging.

    A big thanks to Sogeti and everyone who has been so supportive over my time with them. It’s hard to move on, but I am excited for the prospects that the future will bring.

    -Frog Out


  • Updated Virtual Machine for VS/TFS 2010

    - by Enrique Lima
    If you had downloaded the previous version of the virtual machines, then you are likely aware they are set to expire soon (12/15/2010). Brian Keller announced yesterday (blog post here) the availability of a VM refresh (new expiration set for 6/1/2011). What is part of the refresh? Here is the excerpt from Brian’s post:

    “The version of this virtual machine which was refreshed on December 9, 2010, includes the following additions:

    - Visual Studio 2010 Feature Pack 2
    - Team Foundation Server 2010 Power Tools (September 2010 Release)
    - Visual Studio 2010 Productivity Power Tools (these are disabled in VS so that the screenshots of the hands-on-labs still match; you can quickly enable the Productivity Power Tools via Tools -> Extension Manager from within Visual Studio)
    - Test Scribe for Microsoft Test Manager
    - Visual Studio Scrum 1.0 Process Template
    - All Windows Updates through December 8, 2010
    - Lab Management GDR (KB983578)
    - Visual Studio 2010 Feature Pack 2 pre-requisite hotfix (KB2403277)
    - Microsoft Test Manager hotfix (KB2387011)
    - Minor fit-and-finish fixes based on customer feedback
    - A new expiration date of June 1, 2011”

    The links to download the Virtual Machines are:

    - Hyper-V: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=e0198b64-4acb-4709-b07f-359fb4d523bc&displaylang=en
    - Windows Virtual PC (Win 7): http://www.microsoft.com/downloads/en/details.aspx?FamilyID=509c3ba1-4efc-42b5-b6d8-0232b2cbb26e&displaylang=en


  • HAProxy being killed with more than 54,000 connections

    - by Olly
    I am trying to run HAProxy (1.4.8) on an EC2 machine running Ubuntu 10.04. I need HAProxy to handle many thousands of long-running persistent connections (websockets). With the current setup, HAProxy gets killed at roughly 54,300 connections. If I run HAProxy in the foreground, the only output is "Killed". Am I right in thinking this is the kernel killing the process? Is this because it is out of resources? Can I increase the resources? CPU and memory consumption are low with 50,000 connections, so I don't suspect either of these. How can I prevent this from happening?
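
    A bare "Killed" message usually means a signal from outside the process; the kernel OOM killer logs to dmesg when it is responsible. As a rough sketch of things to check (the config values below are illustrative guesses, not recommendations):

        # see whether the OOM killer fired
        dmesg | grep -i 'out of memory'
        # file-descriptor ceiling for the haproxy user (each proxied connection uses two sockets)
        ulimit -n
        # in haproxy.cfg, the global section can raise HAProxy's own limits, e.g.:
        #   global
        #       maxconn  100000
        #       ulimit-n 200020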


  • Creating Ideal Customers with Modern Marketing

    - by Richard Lefebvre
    “Without that real-time perspective, it's not possible to stay in step with what your customers want and need.” — Customer-Obsessed Marketing Is Your Next Competitive Edge

    Every business talks about focusing on the customer. But few actually deliver. Why? Because digital marketing technology can’t tell a compelling story. It lacks engaging dialogue with no connection beyond the transaction. It’s lost in translation because marketers don’t speak code. And it’s confusing to the customer because marketing and IT can’t connect process and data.

    Take a look at your digital marketing picture. From a distance it may look fine. But look up close. It’s fragmented and the dots are not connected. You need much higher resolution. Step back and see the big picture. Zoom in on the individual customer. But you’ll need Modern Marketing technology engineered with enterprise-grade data management and proven cloud performance.

    Explore the people, processes, and technology of the Oracle Marketing Cloud. Create a culture of customer obsession. Simplify marketing across all channels to turn casual prospects into passionate advocates. Engage ideal customers with a meaningful experience. Personalize your brand narrative for each customer in every chapter of your story to increase engagement and revenue.

    Read the full article and watch the videos here


  • Uploading many large files to a remote server

    - by TiernanO
    I am in the process of creating an offsite backup and need to do an initial load of data. Currently that's about 400GB, give or take 10GB or so. The backup system produces files which are about 4GB each, plus some other, smaller related files. So I need to transfer all 400-ish gigs to a remote server, but how? What is the best method? I have full remote access to the server, so I can install anything I need to install. There are Windows, Linux and a Solaris VM running on the box itself, so any of those can be used there, and I have Windows and Linux at home. I have 2 internet connections in the house, 10Mb/s upload on each, so something that could split the transfer across connections would be handy (kind of like GetRight, but in reverse... PutRight?).
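
    For a one-time bulk load where the link may drop, a resumable transfer tool matters more than raw speed. A minimal sketch with rsync over SSH (the paths and hostname are placeholders):

        # -a preserve attributes, -z compress, --partial keep half-sent files so a retry resumes
        rsync -az --partial --progress /backups/ user@remote.example.com:/backups/
        # to use both uplinks, one hypothetical split: run two rsyncs in parallel,
        # each routed over a different connection with a different subset of the files
        rsync -az --partial /backups/set1/ user@remote.example.com:/backups/set1/ &
        rsync -az --partial /backups/set2/ user@remote.example.com:/backups/set2/ &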


  • VNC viewer authentication failure

    - by Twosingleton
    I recently backed up my data and moved the VNC viewer executable from my PC to my portable hard disk. Realizing that I no longer had VNC, I got the latest one, but all of a sudden I could not connect to my server anymore and got an authentication failure. So I moved the VNC executable back from my portable HD to my local HD, and I am still getting authentication failure errors. I had a certain setup and I don't want to re-create it. Do you know how I can recover, or what happened to cause auth failures all of a sudden? I checked, and the vncserver process is running fine. Old VNC viewer: vnc-4_1_3-x86_win32_viewer.exe New one:


  • ISO 12207 - testing being only validation activity? [closed]

    - by user970696
    Possible Duplicate: How come verification does not include actual testing? The ISO 12207 standard states that testing is only a validation activity, while all static inspections (checking that a requirement, code, etc. is complete and correct) are verification. I did find some articles saying this is not correct, but, you know, they are not "official". I would like to understand, because there are two different concepts in books and articles:

    1) Verification is all testing except for UAT (because only the user can really validate the use). E.g. here
    2) Verification is everything but testing; all testing is validation. E.g. here

    Definitions are mostly the same, as in Sommerville's: "The aim of verification is to check that the software meets its stated functional and non-functional requirements. Validation, however, is a more general process. The aim of validation is to ensure that the software meets the customer’s expectations. It goes beyond simply checking conformance with the specification to demonstrating that the software does what the customer expects it to do." It is really bugging me, because I tend to agree that functional testing done on a product (SIT) is still verification, since I just follow the requirements. But ISO does not agree.


  • How to Zone Forward to a List of Alternative Name Servers in pfSense 2.0.1

    - by Bob B.
    I'm not sure whether dnsmasq is involved in this process on pfSense or not. Before pfSense, we'd do this in BIND thusly:

        zone "firstpartner.com" {
            type forward;
            forwarders { 1.2.3.4; 5.6.7.8; w.x.y.z; };
        };

    I'm intentionally over-explaining this in the interests of specificity: we currently use dnsmasq to direct local queries for our primarydomain.com. Anything that doesn't match a host override entry in pfSense gets passed off to our external name servers, as defined elsewhere in pfSense. There are certain other zones which are not publicly accessible, let's call them firstpartner.com and secondpartner.com, that each have various subdomains that their own name servers handle. I need a way to define a list of name server IPs for each domain zone (see the BIND example above). Thanks in advance for any help you can provide.
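
    pfSense's DNS Forwarder is dnsmasq, and the dnsmasq equivalent of a BIND forward zone is a per-domain server= line; in the pfSense GUI the same thing is usually entered under Services > DNS Forwarder > Domain Overrides, one server per override. A sketch of the raw dnsmasq form:

        # forward all queries for firstpartner.com to its own name servers
        server=/firstpartner.com/1.2.3.4
        server=/firstpartner.com/5.6.7.8
        # and likewise for the second private zone
        server=/secondpartner.com/w.x.y.z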


  • Is there a good way to wrap an existing Python based web application to require a login?

    - by Jonathan B
    I'm in the process of installing an open-source Python based web application to an internal server here at work. The existing code is open - it doesn't require a login to view it - but one of the requirements is that users have to be approved before they can see anything. Is there a good way (using Apache configuration files for example, but any method would be great) to wrap the application so that any access requires a login? I would like to avoid modifying the open-source code (a maintenance nightmare every time a new release comes out). Any thoughts or suggestions?
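
    Apache can require a login without any change to the application code. A minimal sketch using basic authentication (the /app path and htpasswd file location are assumptions for illustration):

        # create the password file and approve a user
        sudo htpasswd -c /etc/apache2/app.htpasswd approveduser

        # in the Apache site configuration, guard the application's URL space
        <Location /app>
            AuthType Basic
            AuthName "Restricted application"
            AuthUserFile /etc/apache2/app.htpasswd
            Require valid-user
        </Location>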


  • Minimal Linux distribution with sshd and apt

    - by Sergey Mikhanov
    When I signed up for my Debian Linux VPS hosting, first logged on, and invoked ps, the only user process running was sshd. As far as I can see, this was a minimal Linux with only two things installed and configured: sshd and apt (plus all dependencies, of course). I want to build (or use an existing) similar Linux distro; any advice on how to build or pick one? Googling "minimum linux" or "linux with sshd only" usually brings up Debian's netinstall, which is not what I want. Thanks in advance.
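
    One way to build such a system yourself is debootstrap, which installs a minimal Debian into a directory. A rough sketch, assuming a prepared and mounted target filesystem at /mnt/target:

        sudo apt-get install debootstrap
        # --variant=minbase installs only essential packages plus apt
        sudo debootstrap --variant=minbase squeeze /mnt/target http://ftp.debian.org/debian
        # add the SSH daemon inside the new system
        sudo chroot /mnt/target apt-get update
        sudo chroot /mnt/target apt-get install openssh-server
        # a bootloader, fstab, and network config still have to be set up by hand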


  • performance monitoring

    - by Sunny
    I want to monitor CPU usage and disk read/write usage for a particular process, say ./myprocess. For CPU, the top command seems to be a nice option, and for reads and writes, iotop seems a handy one. For example, to monitor read/write every second, I use the command iotop -tbod1 | grep "myprocess". My difficulty is that I want only three values to store, namely read/sec, write/sec, and CPU usage/sec. Could you help me with a script that combines the outputs of top and iotop for these three values and stores them in a log file? Thanks!
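
    A rough sketch of such a script, with the caveat that the awk column numbers below assume the default top and iotop output layouts and may need adjusting on your system:

        #!/bin/bash
        # log cpu%, read/s and write/s for one process, once per second
        PROC=myprocess
        LOG=myprocess.log
        while sleep 1; do
            # %CPU is column 9 in default batch-mode top output
            CPU=$(top -bn1 | awk -v p="$PROC" '$0 ~ p {print $9; exit}')
            # iotop: -b batch, -o only active, -n1 one sample; read is col 4, write col 6
            RW=$(sudo iotop -bon1 | awk -v p="$PROC" '$0 ~ p {print $4, $6; exit}')
            echo "$(date +%T) cpu=$CPU read/write=$RW" >> "$LOG"
        done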


  • Ubuntu hangs on boot when NFS-mounting entries in /etc/fstab, but they mount cleanly otherwise

    - by lorin
    I'm managing several Ubuntu 9.10 servers that NFS-mount several folders (including /home). I'd like these folders to be mounted at boot time, via entries in /etc/fstab like these:

        192.168.1.100:/home           /home       nfs  rw  0  0
        192.168.1.100:/usr/ansys_inc  /ansys_inc  nfs  ro  0  0

    Unfortunately, with this configuration the servers usually (although not always) hang during the boot sequence when trying to do the NFS mount. If I comment out these fstab entries, reboot the machine, uncomment them, and mount them manually from the shell, the folders mount cleanly. I'm not sure how to go about debugging this problem. It seems to have something to do with the boot sequence: some relevant process hasn't been started by the time the OS tries to mount the folders.
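
    If the root cause is the network (or portmap/statd) not being up yet, one commonly suggested mitigation, sketched here rather than guaranteed, is to let the mounts retry in the background instead of blocking boot:

        # bg: if the first mount attempt fails, keep retrying in the background;
        # intr: allow the mount to be interrupted instead of hanging hard
        192.168.1.100:/home           /home       nfs  rw,bg,intr  0  0
        192.168.1.100:/usr/ansys_inc  /ansys_inc  nfs  ro,bg,intr  0  0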


  • vmware host stuck after adding a virtual drive to client

    - by Saariko
    I use ESXi 5.0. I created a 400GB virtual drive (located on an iSCSI-mapped datastore) and tried to add it to a specific guest on the host. The task has stopped at 11%, and after over an hour everything seems pretty stuck. Looking at the datastore, it says that 400GB are allocated, but I don't see the new drive attached to the guest. How can I check whether the process is still working, or should I restart the host and hope for the best?
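
    If SSH/Tech Support Mode is enabled on the host, a hedged way to see whether the task is actually progressing is to list the host agent's tasks and watch the disk's backing file grow (the datastore path is illustrative):

        # list running tasks known to the host agent
        vim-cmd vimsvc/task_list
        # check whether the new disk's backing file is still being written
        ls -l /vmfs/volumes/<datastore>/<vmname>/*.vmdk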


  • Bad HD video deinterlacing processing

    - by Guy Fawkes
    I have Ubuntu 12.04 32-bit with Unity. My system configuration is:

        CPU: Core 2 Quad Q6600 (2.4 GHz)
        RAM: 8192 MB DDR2 Kingston
        Video: Palit GeForce GTX 260 216 SP
        Screen resolution: 1680x1050

    I also have Windows 7 Ultimate installed, and there I can watch the same files in Media Player Classic without any horizontal lines. I've installed the vdpau driver, NVIDIA drivers 304.51, and MPlayer2 (within SMPlayer). I've disabled the "Sync to VBlank" option in CCSM (because otherwise, by default, the MPlayer process uses about 50-60 percent of my processor load), tried switching between different deinterlace options in SMPlayer, used the "-vc ffh264vdpau,ffmpeg12vdpau" parameters for MPlayer, and switched to "Ubuntu 2D", but so far I have no results. Any suggestions? How should I set up MPlayer?


  • Change Keybindings (hardware to software)

    - by Daniel
    I ran a search for this, but the answers I saw were referring to something altogether different than what I'm asking for. So let me clarify: I'm not asking how to change key-combo shortcuts. I'm asking: how do you actually change what your computer thinks you did when you press a given key?

    An example of what I mean (and the reason I'm asking): I'm a Chrome user, and I use Windows alongside Ubuntu. I own a Lenovo ThinkPad T61p (it came with my scholarship package, and I would have shopped for a nicer computer if I could have). The T61p has two buttons above the left and right arrow keys that map to the browser's back and forward commands. This is extremely frustrating for me, as I use the arrow keys, and a single accidental keystroke will send me back a page, losing temporary data, and leaving me yelling at my stupid keyboard. At the same time, I'm the type of person who keeps way too many tabs open. Chrome doesn't let me reconfigure keyboard shortcuts, and the only ways it allows you to switch between tabs are ctrl+tab and ctrl+shift+tab, and ctrl+page up/down.

    I was using Notepad++, and it had finally found the solution to both problems: the page back and forward keys functioned as tab back and forward keys. I went through quite some effort to learn how to change the keybindings in Windows. The page back and page forward keys are now the page up and page down keys, respectively; if I hit ctrl, they let me switch tabs easily and rather pleasantly, and if I hit the keys by accident, no harm, no foul.

    Alas, I'm in Ubuntu now, and I need to go through the process again. While I couldn't just find the answer online, like I did for Windows, I know Ubuntu has nice, supportive communities like this one, where, hopefully, somebody can tell me how to do either what I did in Windows, or directly make it so that my computer changes tabs when I hit those buttons (removing the ctrl button from the tab-changing command).
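
    One common approach on Ubuntu, sketched here under the assumption that the ThinkPad browser keys report the usual XF86Back/XF86Forward keycodes, is xmodmap:

        # find the keycode of each key: run xev, press the key, read "keycode NNN"
        xev
        # on evdev setups XF86Back is often keycode 166 and XF86Forward 167;
        # remap them to Page Up (Prior) and Page Down (Next)
        xmodmap -e 'keycode 166 = Prior'
        xmodmap -e 'keycode 167 = Next'
        # to apply at login, put the two "keycode ..." lines in ~/.Xmodmap and run
        # "xmodmap ~/.Xmodmap" from a startup application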


  • "That's cool. But it would be even better if..."

    - by Geertjan
    I recently talked to some NetBeans users who were interested in a demonstration of the features that will be part of NetBeans IDE 7.2. (See the 7.2 New and Noteworthy for the full list.) One of the new features I demonstrated was this one. In an interface declaration, NetBeans IDE 7.2 will provide a hint in the editor sidebar. When the lightbulb is clicked, or Alt-Enter is pressed, the hint is shown. When the hint is invoked, the user will be able to enter the name of a class and the name of a package, and, assuming the defaults are taken, a class with this content will be generated:

        package demo;

        public class WordProcessorImpl implements WordProcessor {

            @Override
            public String process(String word) {
                throw new UnsupportedOperationException("Not supported yet.");
            }

        }

    When I demonstrated the above, the response from the audience was: "That's cool. But it would be even better if..."

    - it was possible to implement an interface into an existing class.
    - it was possible to select a class and specify the interfaces that it should implement.
    - it was possible, in the context of a NetBeans Platform application, to specify the module where the class should be implemented.

    So I created some issues:

    - Implement an interface into an existing class: http://netbeans.org/bugzilla/show_bug.cgi?id=210804
    - Select class and specify interfaces to implement: http://netbeans.org/bugzilla/show_bug.cgi?id=210805
    - Allow user to select module for generating implementation: http://netbeans.org/bugzilla/show_bug.cgi?id=210807


  • Is it appropriate to run a complex enterprise-system configuration and migration project in a similar way to a Scrum development project?

    - by AndyM
    I'm just starting out on the implementation of a large enterprise-wide system, which has complex requirements and many stakeholders. The company has been through a high-level evaluation and tender process and has decided to purchase a highly configurable "off-the-shelf" product rather than building an entirely bespoke system. The system will replace several existing systems and will require a significant amount of data migration. I'm thinking that the implementation of this system (which is expected to take over 2 years) could be run in a similar way to a Scrum software development project, with the first sprints targeted at building the minimal possible functionality needed (across all functional areas), and then iteratively deepening the level of functionality according to stakeholder feedback. I think this will de-risk the project and help ensure a balance of stakeholder needs within the available time. The user stories are still the same; it's just that to implement them we have to work within the constraints of the pre-purchased system. When it comes to 'building stuff', instead of writing custom code the team will be configuring the off-the-shelf package, writing data conversion scripts and the like (and it should be a lot quicker!). Does this sound like a sensible approach? Does the Agile approach make sense here?


  • frequent abnormal shutdowns/system crashes

    - by user110353
    It's been almost 5 days since I installed Ubuntu, and this is almost the 6th time that my laptop has crashed entirely and shut down abnormally. It heats up, and I have to wait 20-odd minutes before I can turn it on again. A message appears that my PC crashed due to overheating, which may damage my hard disk. The crashes happen when I open some application that freezes my PC, not even giving me enough time to go to the system monitor and end the process. Sometimes the culprit application is Everpad, sometimes it's TeamViewer, sometimes it's something else. This is something very serious. The last crash occurred at 09:14:40. Kindly click here to view the system log. I want to stick with Ubuntu and the same laptop, as I had serious issues with Windows and nearly went out to dump my laptop and purchase a more powerful system. Below are my hw/os specs. Kindly advise on how to resolve this issue.

        Ubuntu 12.10
        Kernel 3.5.0-18-generic
        GNOME 3.6.0
        Memory: 2.0GB
        Processor: Genuine Intel CPU [email protected] x 2
        Available disk space: 63.7 GB

    Thanks in advance
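
    As a hedged first diagnostic step, it is worth confirming the overheating with lm-sensors before suspecting any particular application:

        sudo apt-get install lm-sensors
        sudo sensors-detect     # answer the prompts to probe for sensor chips
        sensors                 # report current CPU and board temperatures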


  • Is knowledge of what happens 'behind the scenes' (in the compiler, external DLLs, etc.) important?

    - by I_Question_Things_Deeply
    I have been a computer fanatic for almost a decade now. I've always loved and wondered how computers work, from the purest, lowest hardware level to the very smallest pixel on the screen, and all the software around that. That seems to be my problem, though: as I try to write code (I'm pretty fluent at C++), I always sit for enormous amounts of time in front of a text editor wondering how every line, statement, datum, function, etc. will correspond to every assembly and machine instruction performed to do absolutely everything necessary for the kernel to allocate memory to run my compiled program, and how all of the other hardware is used as well. For example, I would write cout << "Before memory changed" << endl; and run the debugger to get the assembly for this, and then try to reverse-disassemble the assembly to machine code based on my ISA, and then research every .dll, library file, linked library, the linking process, the linker source code of the program, the make file, the steps my kernel takes to process this compilation, and the hardware's part aside from the processor (e.g. video card, sound card, chipset, cache latency, byte-sized registers, calling conventions, DDR3 RAM, the disk drive, filesystem functioning, and so many other things). Am I going about programming wrong? I mean, I feel I should know everything that goes on underneath the English syntax of a computer program. But the problem is that the more I research every little thing, the less I actually accomplish at all. I can never finish anything because of this mentality, yet I feel compelled to know everything... what should I do?

