Search Results

Search found 26517 results on 1061 pages for 'large directory'.


  • Licence Error for Matlab 2012a on Ubuntu 12.04 (64-bit)

    - by MalTec
    I have installed Matlab 2012a on Ubuntu 12.04. While providing the licence, I get the following error:

    Could not complete Activation because the License File could not be written to disk. You might not have write permission on the License File or the folder. /usr/local/MATLAB/R2012a/licenses/license_Malhar-PC_161052_R2012a.lic See your System Administrator for assistance. The specific error message text is: /usr/local/MATLAB/R2012a/licenses/license_Malhar-PC_161052_R2012a.lic (No such file or directory).
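    The activation ran without permission to write under /usr/local/MATLAB, so the licenses directory was never created. A minimal sketch of one possible fix, assuming the path from the error message and that your own login should own the licence file (the username "malhar" is an illustration):

        # create the licenses directory if it is missing (assumption: default install path)
        sudo mkdir -p /usr/local/MATLAB/R2012a/licenses
        # let your own user write the licence file there (replace 'malhar' with your username)
        sudo chown -R malhar:malhar /usr/local/MATLAB/R2012a/licenses
        # re-run activation as your normal user
        /usr/local/MATLAB/R2012a/bin/activate_matlab.sh

    Alternatively, running the activation client once with sudo also works, at the cost of root-owned files under the install tree.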

    Read the article

  • SPSS launcher for Unity

    - by user67157
    I have "SPSS 20 for Linux" statistical analysis program installed under /opt/IBM directory. I can run it via terminal entering the command line with /opt/IBM, and it runs without any problems. When the program window opens, I go to the launcher and right-click on its icon to lock it there, but after closing it, the icon becomes useless. When I click on it, nothing happens. I tried to ceate a launcher for it, but there is no use again.

    Read the article

  • Running OpenStack Icehouse with ZFS Storage Appliance

    - by Ronen Kofman
    A couple of months ago Oracle announced support for an OpenStack Cinder plugin with the ZFS Storage Appliance (aka ZFSSA). With our recent release of the Icehouse tech preview, I thought it was a good opportunity to demonstrate the ZFSSA plugin working with Icehouse.

One thing that helps a lot in getting started with the ZFSSA is that it has a VirtualBox simulator. The simulator allows users to try out the appliance’s features before getting to a real box. Users can test the functionality and design an environment even before they have a real appliance, which makes the deployment process much more efficient. With OpenStack this is especially nice, because having a simulator on the other end allows us to test the complete Cinder plugin and check the entire integration on a single server or even a laptop. Let’s see how this works.

Installing and Configuring the Simulator

To get started we first need to download the simulator (it is available here), unzip it, and it is ready to be imported into VirtualBox. If you do not already have VirtualBox installed, you can download it from here according to your platform of choice. To import the simulator, go to the VirtualBox console, File -> Import Appliance, navigate to the location of the simulator and import the virtual machine. When opening the virtual machine you will need to make the following changes:
- Network – by default the network is “Host Only”; change it to “Bridged” so the VM can connect to the network and be accessible.
- Memory (optional) – the VM comes with a default of 2560MB, which may be fine, but more memory cannot hurt; in my case I decided to give it 8192.
- vCPU (optional) – by default the VM comes with 1 vCPU; I decided to change it to two, and you are welcome to do so too.
And here is how the VM looks:

Start the VM. When the boot process completes we will need to change the root password, and then the simulator is running and ready to go. Now that the simulator is up and running we can access the simulated appliance using the URL https://<IP or DNS name>:215/; the IP is shown on the virtual machine console. At this stage we will need to configure the appliance; in my case I did not change any of the defaults (in other words, pressed ‘commit’ several times) and the simulated appliance was configured and ready to go. We will need to enable REST access, otherwise Cinder will not be able to call the appliance. We do that in Configuration -> Services; at the end of the page there is a ‘REST’ button, enable it. If you are a more advanced user you can set additional features in the appliance, but for the purpose of this demo this is sufficient. One final step is to create a pool: go to Configuration -> Storage and add a pool as shown below; the pool is named “default”:

The simulator is now running, configured and ready for action.

Configuring Cinder

Back to OpenStack. I have a multi-node deployment which we created according to the “Getting Started with Oracle VM, Oracle Linux and OpenStack” guide, using the Icehouse tech preview release. Now we need to install and configure the ZFSSA Cinder plugin using the README file. In short the steps are as follows:

1. Copy the files from here to the control node and place them at:
    /usr/lib/python2.6/site-packages/cinder/volume/drivers/zfssa

2. Configure the plugin, editing /etc/cinder/cinder.conf:

    # Driver to use for volume creation (string value)
    #volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_driver=cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver
    zfssa_host = <HOST IP>
    zfssa_auth_user = root
    zfssa_auth_password = <ROOT PASSWORD>
    zfssa_pool = default
    zfssa_target_portal = <HOST IP>:3260
    zfssa_project = test
    zfssa_initiator_group = default
    zfssa_target_interfaces = e1000g0

3. Restart the cinder-volume service:
    service openstack-cinder-volume restart

4. Look into the log file; this will tell us whether everything works well so far. If you see any errors, fix them before continuing.

5. Install the iscsi-initiator-utils package; this is important since the plugin uses iscsi commands from this package:
    yum install -y iscsi-initiator-utils

The installation and configuration are very simple. We do not need to have a “project” in the ZFSSA, but we do need to define a pool.

Creating and Using Volumes in OpenStack

We are now ready to work. To get started, let’s create a volume in OpenStack and see it show up on the simulator:

    # cinder create 2 --display-name my-volume-1
    +---------------------+--------------------------------------+
    |       Property      |                Value                 |
    +---------------------+--------------------------------------+
    |     attachments     |                  []                  |
    |  availability_zone  |                 nova                 |
    |       bootable      |                false                 |
    |      created_at     |      2014-08-12T04:24:37.806752      |
    | display_description |                 None                 |
    |     display_name    |             my-volume-1              |
    |      encrypted      |                False                 |
    |          id         | df67c447-9a36-4887-a8ff-74178d5d06ee |
    |       metadata      |                  {}                  |
    |         size        |                  2                   |
    |     snapshot_id     |                 None                 |
    |     source_volid    |                 None                 |
    |        status       |               creating               |
    |     volume_type     |                 None                 |
    +---------------------+--------------------------------------+

In the simulator:

Extending the volume to 5G:

    # cinder extend df67c447-9a36-4887-a8ff-74178d5d06ee 5

In the simulator:

Creating Templates Using Cinder Volumes

By default OpenStack supports ephemeral storage, where an image is copied into the run area during instance launch and deleted when the instance is terminated. With Cinder we can create persistent storage and launch instances from a Cinder volume. Booting from volume has several advantages, and one of the main ones is speed: no matter how large the volume is, the launch operation is immediate, with no copying of an image to a run area, an operation which can take a long time when using ephemeral storage (depending on image size).

In this deployment we have a Glance image of Oracle Linux 6.5, and I would like to turn it into a volume I can boot from. When creating a volume from an image we actually “download” the image into the volume and make the volume bootable. This process can take some time depending on the image size, and during the download we will see the following status:

    # cinder create --image-id 487a0731-599a-499e-b0e2-5d9b20201f0f --display-name ol65 2
    # cinder list
    +--------------------------------------+-------------+--------------+------+-------------+ …
    |                  ID                  |    Status   | Display Name | Size | Volume Type | …
    +--------------------------------------+-------------+--------------+------+-------------+ …
    | df67c447-9a36-4887-a8ff-74178d5d06ee |  available  | my-volume-1  |  5   |     None    | …
    | f61702b6-4204-4f10-8bdf-7da792f15c28 | downloading |     ol65     |  2   |     None    | …
    +--------------------------------------+-------------+--------------+------+-------------+ …

After the download is complete we will see that the volume status changed to “available” and that the bootable state is “true”. We can use this new volume to boot an instance from, or we can use it as a template. Cinder can create a volume from another volume, and the ZFSSA can replicate volumes instantly in the back end. The result is an efficient template model where users can spawn an instance from a “template” instantly, even if the template is very large in size.

Let’s try replicating the bootable volume with Oracle Linux 6.5 on it, creating three additional bootable volumes:

    # cinder create 2 --source-volid f61702b6-4204-4f10-8bdf-7da792f15c28 --display-name ol65-bootable-1
    # cinder create 2 --source-volid f61702b6-4204-4f10-8bdf-7da792f15c28 --display-name ol65-bootable-2
    # cinder create 2 --source-volid f61702b6-4204-4f10-8bdf-7da792f15c28 --display-name ol65-bootable-3
    # cinder list
    +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
    |                  ID                  |   Status  |   Display Name  | Size | Volume Type | Bootable | Attached to |
    +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
    | 9bfe0deb-b9c7-4d97-8522-1354fc533c26 | available | ol65-bootable-2 |  2   |     None    |   true   |             |
    | a311a855-6fb8-472d-b091-4d9703ef6b9a | available | ol65-bootable-1 |  2   |     None    |   true   |             |
    | df67c447-9a36-4887-a8ff-74178d5d06ee | available |   my-volume-1   |  5   |     None    |  false   |             |
    | e7fbd2eb-e726-452b-9a88-b5eee0736175 | available | ol65-bootable-3 |  2   |     None    |   true   |             |
    | f61702b6-4204-4f10-8bdf-7da792f15c28 | available |       ol65      |  2   |     None    |   true   |             |
    +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+

Note that the creation of those 3 volumes was almost immediate: no need to download or copy, the ZFSSA takes care of the volume copy for us.

Start 3 instances:

    # nova boot --boot-volume a311a855-6fb8-472d-b091-4d9703ef6b9a --flavor m1.tiny ol65-instance-1 --nic net-id=25b19746-3aea-4236-8193-4c6284e76eca
    # nova boot --boot-volume 9bfe0deb-b9c7-4d97-8522-1354fc533c26 --flavor m1.tiny ol65-instance-2 --nic net-id=25b19746-3aea-4236-8193-4c6284e76eca
    # nova boot --boot-volume e7fbd2eb-e726-452b-9a88-b5eee0736175 --flavor m1.tiny ol65-instance-3 --nic net-id=25b19746-3aea-4236-8193-4c6284e76eca

Instantly replicating volumes is a very powerful feature, especially for large templates. The ZFSSA Cinder plugin allows us to take advantage of this ZFSSA feature. By offloading some of the operations to the array, OpenStack creates a highly efficient environment where persistent volumes can be instantly created from a template.

That’s all for now. With this environment you can continue to test the ZFSSA with OpenStack, and when you are ready for the real appliance the operations will look the same. @RonenKofman
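If you want to exercise the data volume as well, it can be attached to one of the running instances straight from the CLI. A quick sketch (instance name and volume ID taken from the listings above; the /dev/vdb device name is an assumption):

    # attach my-volume-1 to the first instance as a second disk
    nova volume-attach ol65-instance-1 df67c447-9a36-4887-a8ff-74178d5d06ee /dev/vdb
    # confirm the instance state and the attachment
    nova list
    cinder list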

    Read the article

  • An Introduction to Information Rights Management in Exchange 2010

    If you’re a Systems Administrator concerned about information security, you could do worse than implementing Microsoft’s Information Rights Management system, especially if you already have Active Directory Rights Management Services in place. Elie Bou Issa talks Hub Servers, Transport Protection Rules and Outlook integration in this excellent guide to getting started with IRM.

    Read the article

  • Virtual Box - How to open a .VDI Virtual Machine

    - by [email protected]
    TUESDAY, APRIL 27, 2010

How to open a .VDI Virtual Machine

Sometimes someone shares a virtual machine with the extension .VDI with us, and we may wonder how to open it and with what. Well, the answer is: it is a VirtualBox virtual machine. If you have not downloaded VirtualBox yet, you can do so easily; just follow this post:
http://listeningoracle.blogspot.com/2010/04/que-es-virtualbox.html or http://oracleoforacle.wordpress.com/2010/04/14/ques-es-virtualbox/

OK, now with VirtualBox installed, open it and proceed with the following:
1. Open the Virtual Media Manager.
2. Click on Actions -> Add and select the .VDI file. Click "Ok".
3. Now we can register the new virtual machine. Click New, then Next.
4. Write down a name for the virtual machine and proceed to select an operating system and version (in this case Linux: Oracle Enterprise Linux or Red Hat). Click Next.
5. Select the base memory amount for the virtual machine (minimum 1280 MB in our case). Click Next.
6. Select the disk 11GR2_OEL5_32GB.vdi that was added in the Virtual Media Manager in step 2. Don't forget to leave "Boot Hard Disk (Primary Master)" selected, given it is the only disk assigned to the virtual machine. Click Next.
7. Click Finish.
8. This step is important: click on the Settings button.
9. In the General option, click Advanced settings. Here you must change the default directory used to save your snapshots; my recommendation is to set it to the same directory where the .VDI file is. Otherwise you can end up with the same virtual machine and its snapshots in different paths.
10. Now click on System and proceed to assign the correct memory (if you did not do so before). Note: enable "Enable IO APIC" if you are planning to assign more than one CPU to the virtual machine. Define the processors for the virtual machine; if your processor is dual core, choose 2.
11. Select the video memory amount you want to assign to the virtual machine.
12. Associate more storage disks to the virtual machine if you have more VDI files (not our case). The disk must be selected as IDE Primary Master.
13. You can verify the other options, but with these changes you will be able to start the VM. Note: sometimes the VM owner may share some instructions; if so, follow them.
14. Finally, start the virtual machine (Click > Start).
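For anyone who prefers the command line, roughly the same registration can be done with VBoxManage. A minimal sketch, assuming VirtualBox's CLI is on the PATH and using the disk name from the post (the VM name "OEL5", OS type and memory figures are assumptions to adjust):

    # 'Oracle' is an assumed ostype ID; 'VBoxManage list ostypes' shows the valid ones
    VBoxManage createvm --name OEL5 --ostype Oracle --register
    # give it memory and enable the IO APIC (needed for more than one vCPU)
    VBoxManage modifyvm OEL5 --memory 1280 --ioapic on --cpus 2
    # add an IDE controller and attach the shared .VDI as the primary master
    VBoxManage storagectl OEL5 --name "IDE Controller" --add ide
    VBoxManage storageattach OEL5 --storagectl "IDE Controller" \
        --port 0 --device 0 --type hdd --medium 11GR2_OEL5_32GB.vdi
    # boot it
    VBoxManage startvm OEL5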

    Read the article

  • Customizing Flowcharts in Oracle Tutor

    - by [email protected]
    Today we're going to look at how you can customize the flowcharts within Oracle Tutor procedures, and how you can share those changes with other authors within your company. Here is an image of a flowchart within a Tutor procedure with the default size and color scheme.

You may want to change the size of your flowcharts, as your end-users might have larger screens or need larger fonts. To change the size and number of columns, navigate to Tutor > Author > Author Options > Flowcharts. The default is to have 4 columns appear in each flowchart but, if I change it to six, my end-users will see a denser flowchart. This might be too dense for my end-users, so I will change it to 5 columns, and I will also deselect the option to have separate task boxes.

Now let's look at how to customize the colors. Within the Flowchart options dialog there is a button labeled "Colors." This brings up a dialog box of every object on a Tutor flowchart, and I can modify the color of each object, as well as the text within the object. If I click on the background, the "page" object appears in the Item field, and now I can customize the color and the title text by selecting Select Fill Color and/or Select Text Color. A dialog box with color choices appears. If I select Define Custom Colors, I can make my selections even more precise. Each time I change the color of an object, it appears in the selection screen. When the flowchart customization is finished, I can save my changes by naming the scheme.

Although the color scheme I have chosen is rather silly looking, perhaps I want others to give me their feedback and make changes as they wish. I can share the color scheme with them by copying the FCP.INI file in the Tutor\Author directory into the same directory on their systems. If the other users have color schemes that they do not want to lose, they can copy the relevant lines from the FCP.INI file into their file. If I flowchart my document with the new scheme, I can see how it looks within the document.

Sometimes just one or two changes to the default scheme are enough to customize the flowchart to your company's color palette. I have seen customers who have only changed the Start object to green and the End object to red, and I've seen another customer who changed every object to some variant of black and orange. Experiment! And let us know how you have customized your flowcharts.

Mary R. Keane
Senior Development Director, Oracle Tutor

    Read the article

  • Kicked back to log in screen immediately after log in

    - by Zachary Alfakir
    I am running Ubuntu 12.04 LTS and whenever I try to log in using my main account, it kicks me back to the log-in screen. My other accounts don't do this, however. I also tried to read .xsession-errors using cat, but it only outputs:

    gpg-agent[7156]: failed to run the command: No such file or directory

    I can also log in to my main account from a tty. How can I fix it so I can log into my main account again?
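    Since a tty login works but a graphical session does not, a frequent culprit on 12.04 is an X authority file in the home directory that has ended up owned by root. A diagnostic and repair sketch, assuming the affected account is named zachary (run it from the tty):

        # look for root-owned session files in your home directory
        ls -l ~/.Xauthority ~/.ICEauthority
        # if either is owned by root, give it back to your user
        sudo chown zachary:zachary ~/.Xauthority ~/.ICEauthority
        # a full home partition produces the same symptom, so check free space too
        df -h ~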

    Read the article

  • Using tar, the entire folder structure is included; I don't want that

    - by Blankman
    I am tarring a folder and for some reason the entire directory structure that precedes the folder I am tarring is included. I am doing this in a script, like:

    'tar czf ' + dir + '/asdf.tgz ' + dir + 'asdf/'

    Where dir is something like: /Downloads/archive/

    In the man pages, I see I can fix this, but I can't get it to work. I tried:

    tar czf -C dir ...

    But now I have some kind of a file named -C in my folder (which I can't seem to delete, btw!). Please help!
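    The -C option has to come after the archive name and before the paths being archived; placed where it was, tar took "-C" as the name of the archive to create, which is why a file literally called -C appeared. A sketch of the intended invocation, assuming dir expands to /Downloads/archive/:

        # change into $dir first, then archive only the asdf/ subtree
        tar czf "${dir}asdf.tgz" -C "$dir" asdf/
        # the stray file can be removed by stopping rm from parsing it as an option
        rm ./-C          # or: rm -- -C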

    Read the article

  • Package Installation Failure

    - by mahima
    Whenever I try to install a new package or upgrade a package, it fails with the error below:

    Setting up install-info (4.13a.dfsg.1-5ubuntu1) ...
    /etc/environment: line 1: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games: No such file or directory
    dpkg: error processing install-info (--configure):
     subprocess installed post-installation script returned error exit status 1
    No apport report written because MaxReports is reached already
    Errors were encountered while processing:
     install-info
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    I checked that all the directories mentioned in the PATH variable exist.
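    That message suggests the post-installation script is trying to execute line 1 of /etc/environment rather than read it as an assignment, which usually means the file has picked up a hidden character (a UTF-8 byte-order mark or a Windows line ending, for example) in front of PATH=. A diagnostic sketch, assuming the stock Ubuntu layout of that file:

        # show invisible characters; a BOM appears as M-oM-;M-? and CR line endings as ^M
        cat -A /etc/environment
        # if anything odd shows up, back the file up and rewrite it with a clean single line
        sudo cp /etc/environment /etc/environment.bak
        sudo tee /etc/environment >/dev/null <<'EOF'
        PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
        EOF
        # then let dpkg finish what it started
        sudo dpkg --configure -a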

    Read the article

  • Library missing for executable file

    - by user1610406
    There's an executable I downloaded onto my Ubuntu 10.04 machine that I can't run because it's missing a library. I have also tried compiling the source with CMake. This is my terminal output:

    zack@zack-laptop:~/Desktop$ ./MultiMC
    ./MultiMC: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory

    I think I need libssl 1.0 to run this file, but I'm not sure. Any help?
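    ldd will confirm exactly which shared libraries are unresolved, and on releases that ship OpenSSL 1.0 the runtime library comes from the libssl1.0.0 package. A sketch, with the caveat that Ubuntu 10.04 predates OpenSSL 1.0.0 in the archive, so that package may not exist there and a newer release (or a locally built OpenSSL) may be needed:

        # list every library the binary wants but cannot find
        ldd ./MultiMC | grep "not found"
        # on releases that carry it, this provides libssl.so.1.0.0 and libcrypto.so.1.0.0
        sudo apt-get install libssl1.0.0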

    Read the article

  • Can't return to stock Android

    - by kurt
    Whenever I try to uncompress the tar file, I get this:

    tar (child): nakasi-jro03d-factory-e102ba72.tg: Cannot open: No such file or directory
    tar (child): Error is not recoverable: exiting now
    tar: Child returned status 2
    tar: Error is not recoverable: exiting now

    I'm fairly new to Ubuntu, so I'm pretty sure that's my fault. So I just extracted it manually using the GUI. But when I try to flash it, I get:

    sudo: ./flash-all.sh: command not found

    Please help.
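    Two separate things seem to be going on: the tar error says it cannot find a file ending in .tg, which looks like a mistyped filename (the Nexus factory images end in .tgz), and the sudo error means flash-all.sh is not in the directory you are running from, or lacks the execute bit. A sketch, assuming the archive really is named nakasi-jro03d-factory-e102ba72.tgz:

        # extract the factory image (tab completion avoids typos in the long name)
        tar xzvf nakasi-jro03d-factory-e102ba72.tgz
        # cd into whatever directory the archive created (nakasi-jro03d here is an assumption)
        cd nakasi-jro03d
        # make sure the script is executable, then run it from that directory
        chmod +x flash-all.sh
        sudo ./flash-all.sh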

    Read the article

  • How to install VLC player from a tar file

    - by mohit
    After getting vlc-2.0.3.tar.gz I extracted it and then tried ./install, but it gave me an error. Then I did ./configure and got this:

    configure: error: No package 'dbus-1' found.

    Then I tried this:

    apt-get install dbus-1

    But I got this:

    E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
    E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?

    What do I do next?
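    configure is asking for the dbus development files, not a package literally called dbus-1, and the lock error means either another package manager was running or apt was run without root. A sketch of the usual route on Ubuntu:

        # make sure no other Software Center / update-manager / apt process is running, then:
        sudo apt-get install libdbus-1-dev        # provides the 'dbus-1' pkg-config file
        # or pull in the full build dependency set for VLC (needs deb-src lines in sources.list)
        sudo apt-get build-dep vlc
        # then, back in the extracted source tree
        ./configure && make && sudo make install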

    Read the article

  • Debugging .NET code called from X++ code in AX 2012

    - by ssmantha
    A very intriguing issue came to me: debugging .NET code called from X++ code in AX 2012. This was indeed a challenge to nail down. Luckily the tools, and some concepts, helped me achieve the task. Here it goes...

We need seamless debugging back and forth between the AX debugger and Visual Studio. To enable this, we first need to see whether the DLL to be debugged is present in the GAC; if so, we may need to uninstall it from there because of the order in which .NET loads assemblies. Assemblies are loaded from the GAC first, and only then does the runtime check for public and private assemblies. Since the assembly in the GAC is always compiled with runtime optimizations, it is difficult to debug. We need to unhook this assembly from the GAC and then rely on the normal .NET assembly loading patterns.

Step 1: Remove the target assembly from the GAC. Before that, stop all the AOS servers and close every program that relies on the AOT, e.g. all clients and even Visual Studio.
Step 2: Build your sample code that is present in the AOT in debug mode and get the DLL along with its PDB files.
Step 3: Place these files in the Server\..\Bin and Client\bin directories of the AX installation.
Step 4: Configure Visual Studio:
Step 4.1: Configure debugging options. In Visual Studio go to Debug -> Options and Settings -> Debug node -> General sub node and disable "Enable Just My Code (managed)".
Step 4.2: Specify the symbol loading directory options. Specify the locations of the Client bin and Server bin directories of the installation; remember to pick the Server bin directory corresponding to your AOS instance.
Step 4.3: Configure the project for debugging.
Step 5: Ready to go. Place your breakpoints in X++ and in .NET wherever necessary before this process. Run the Visual Studio project and it will invoke the AX client, hitting your breakpoint in X++ code; when you step in with F11 the Visual Studio debugger becomes active, and from there onwards you can debug the complete flow.

Debugging seamlessly across debuggers is a really good and mostly underutilized feature; it improves troubleshooting and saves a lot of time. Stay tuned for more in Advanced Debugging...

    Read the article

  • How to install JavaFx in Ubuntu 12.04?

    - by Ant's
    I downloaded JavaFX from here. I placed it in my home directory (anto) under the name javafx. Then I did something like this: vi ~/.bashrc and added the following lines:

    javaFx_home=/anto/javafx/rt/lib/jfxrt.jar
    export PATH=$PATH:$javaFx_home

    But after providing that classpath, I tried running groovy MyProgram (which depends on the JavaFX classpath), and it throws an error. Where did I go wrong?
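    Two things stand out: the jar was appended to PATH, which only affects where the shell looks for executables, and the path starts with /anto rather than the home directory /home/anto. A sketch of what ~/.bashrc probably needs instead, assuming the kit really was unpacked to ~/javafx:

        # point at the JavaFX runtime jar (assumed location under the home directory)
        export JAVAFX_HOME="$HOME/javafx"
        export CLASSPATH="$CLASSPATH:$JAVAFX_HOME/rt/lib/jfxrt.jar"

        # reload the file and run the script, or pass the jar explicitly
        source ~/.bashrc
        groovy -cp "$JAVAFX_HOME/rt/lib/jfxrt.jar" MyProgram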

    Read the article

  • Encrypted home won't mount automatically nor with ecryptfs-mount-private

    - by Patrik Swedman
    Up until recently my encrypted home worked great, but after a reboot it didn't mount itself automatically, and when I try to mount it manually I get a mount error:

    patrik@patrik-server:~$ ecryptfs-mount-private
    Enter your login passphrase:
    Inserted auth tok with sig [9af248791dd63c29] into the user session keyring
    mount: Invalid argument
    patrik@patrik-server:~$

    I've also tried with sudo, even though that shouldn't be necessary:

    patrik@patrik-server:/$ sudo ecryptfs-mount-private
    [sudo] password for patrik:
    Enter your login passphrase:
    Inserted auth tok with sig [9af248791dd63c29] into the user session keyring
    fopen: No such file or directory

    I'm using Ubuntu 10.04.4 LTS and I access it over SSH with PuTTY.
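    "mount: Invalid argument" from ecryptfs-mount-private often means the kernel side never got set up for the session (module not loaded, or the control files under ~/.ecryptfs are damaged); the sudo attempt fails differently because root looks for root's own ~/.ecryptfs files rather than yours, so sudo will not help here. A diagnostic sketch, run as the patrik user:

        # is the kernel module present?
        lsmod | grep ecryptfs || sudo modprobe ecryptfs
        # the private mount needs these control files to exist and be readable
        ls -l ~/.ecryptfs/Private.sig ~/.ecryptfs/Private.mnt ~/.ecryptfs/wrapped-passphrase
        # verify the wrapped passphrase still unwraps with your login passphrase
        ecryptfs-unwrap-passphrase ~/.ecryptfs/wrapped-passphrase
        # then retry
        ecryptfs-mount-private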

    Read the article

  • Windows Azure Training Kit October 2012 Release

    - by Clint Edmonson
    The Windows Azure Technical Evangelism team have been busy bees lately and we want to share with you what they've been working on. As you know, we release the Windows Azure Training Kit on a regular cadence, so I'm pleased to announce the Windows Azure Training Kit October 2012 Release. This update of the training kit includes 47 hands-on labs, 24 demos and 38 presentations designed to help you learn how to build applications that use Windows Azure services, including hands-on labs updated for the latest version of Visual Studio 2012 and Windows 8, plus new demos and presentations.

Essential Links: Windows Azure Training Kit Download, Windows Azure Training Kit GitHub [Issues]

Updated Presentations With Speaker Notes
Your voices were heard loud and clear! I am excited to announce that speaker notes have been added to the majority of the content we have available. Find the new updated decks which contain speaker notes below: Foundation SQL Federation Virtual Machine Overview Virtual Networks Windows 8 and Windows Azure Web Sites Windows Azure Cloud Services Windows Azure Overview Windows Azure Service Bus Deploying Active Directory Building Apps With IaaS and PaaS Identity and Access Control Linux Virtual Machines Managing Virtual Machines PowerShell Migrating Apps and Workloads Scalable Global and Highly Available Apps Security and Identity SQL Database SQL Database Migration Cloud Service Life Cycle DevCamps Cloud Services iOS, Android and Windows Azure Windows 8 and Windows Azure Web Sites Windows 8 and Windows Azure Mobile Services

Added Localized Content
Due to the excitement in the community surrounding the Mobile Services launch, it was apparent that we needed to make localized content available to continue to deliver the exciting message around Windows Azure Mobile Services. Localized content is available in the following languages: French, Japanese, German, Chinese (Taiwan), Spanish, Italian, Korean, Portuguese (Brazilian), Russian.

Updated Hands-On Labs
To support those who have upgraded to Visual Studio 2012 or those trying out the Visual Studio 2012 Express editions, we have made sure that the content is available and supported (selected labs only) in Visual Studio 2012 Express and up: Visual Studio 2012 Windows Azure Traffic Manager Introduction to Cloud Services Service Bus Messaging Introduction to Access Control Service. This adds a significant amount of additional content, so we have revamped the Hands-On Lab Navigation page to include subsections for Visual Studio 2012 Labs, Visual Studio 2010 Labs, Open Source Labs, Scenario Labs, All Labs.

Added Demos
Demos are available for a number of presentations in the Foundation, DevCamp, ITPro Event and Device + Service DevCamps. You can browse through the demos on the respective Demo Navigation page or on GitHub (links provided in the Demo listing below): HelloASP Connecting Cloud Services Service Bus Relay Windows 8 and Mobile Services URL Shortener iOS Client Migrating a Web Farm Deploying Active Directory URL Shortener Service (PHP) Geo-Location Service (PHP) Geo-Location Android Client Getting Started with VMs Load Balancing Availability Deploying Hybrid Apps Migrate VM AppController Geo-Location iOS Client Scale Up/Down Using CSUpload URL Shortener Android Client Imaging Virtual Machines

The Windows Azure Training Kit is open source and available on GitHub, enabling you in the community to report issues or fork the project and either extend the solution or commit bug fixes back to the Training Kit. You can find out more details about the training kit from our GitHub page, including guidelines on how to commit back to the project. Stay tuned to my Twitter feed for Windows Azure and other Microsoft announcements, updates, and links: @clinted

    Read the article

  • Giving a Zone "More Power"

    - by Brian Leonard
    In addition to the traditional virtualization benefits that Solaris zones offer, applications running in zones are also running in a more secure environment. One way to quantify this is to compare the privileges available to the global zone with those of a local zone. For example, there are 82 distinct privileges available to the global zone:

    bleonard@solaris:~$ ppriv -l | wc -l
    82

You can view the descriptions for each of those privileges as follows:

    bleonard@solaris:~$ ppriv -lv
    contract_event
            Allows a process to request critical events without limitation.
            Allows a process to request reliable delivery of all events on any event queue.
    contract_identity
            Allows a process to set the service FMRI value of a process contract template.
    ...

Or for just one or more privileges:

    bleonard@solaris:~$ ppriv -lv file_dac_read file_dac_write
    file_dac_read
            Allows a process to read a file or directory whose permission bits or ACL do not allow the process read permission.
    file_dac_write
            Allows a process to write a file or directory whose permission bits or ACL do not allow the process write permission. In order to write files owned by uid 0 in the absence of an effective uid of 0 ALL privileges are required.

However, in a non-global zone, only 43 of the 82 privileges are available by default:

    root@myzone:~# ppriv -l zone | wc -l
    43

The missing privileges are: cpc_cpu dtrace_kernel dtrace_proc dtrace_user file_downgrade_sl file_flag_set file_upgrade_sl graphics_access graphics_map net_mac_implicit proc_clock_highres proc_priocntl proc_zone sys_config sys_devices sys_ipc_config sys_linkdir sys_dl_config sys_net_config sys_res_bind sys_res_config sys_smb sys_suser_compat sys_time sys_trans_label virt_manage win_colormap win_config win_dac_read win_dac_write win_devices win_dga win_downgrade_sl win_fontpath win_mac_read win_mac_write win_selection win_upgrade_sl xvm_control

However, just like Tim Taylor, it is possible to give your zones more power. For example, a zone by default doesn't have the privileges to support DTrace:

    root@myzone:~# dtrace -l
    ID   PROVIDER            MODULE                          FUNCTION NAME

The DTrace privileges can be added, however, as follows:

    bleonard@solaris:~$ sudo zonecfg -z myzone
    Password:
    zonecfg:myzone> set limitpriv="default,dtrace_proc,dtrace_user"
    zonecfg:myzone> verify
    zonecfg:myzone> exit
    bleonard@solaris:~$ sudo zoneadm -z myzone reboot

Now I can run DTrace from within the zone:

    root@myzone:~# dtrace -l | more
    ID   PROVIDER            MODULE                          FUNCTION NAME
     1     dtrace                                                     BEGIN
     2     dtrace                                                     END
     3     dtrace                                                     ERROR
    7115   syscall                                               nosys entry
    7116   syscall                                               nosys return
    ...

Note, certain privileges are never allowed to be assigned to a zone. You'll be notified on boot if you attempt to assign a prohibited privilege to a zone:

    bleonard@solaris:~$ sudo zoneadm -z myzone reboot
    privilege "dtrace_kernel" is not permitted within the zone's privilege set
    zoneadm: zone myzone failed to verify

Here's a nice listing of all the privileges and their zone status (default, optional, prohibited): Privileges in a Non-Global Zone.
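To see what a zone's privilege limit currently is before changing it, the same zonecfg and ppriv tooling can be used read-only. A quick sketch from the global zone, assuming the zone is still named myzone:

    # show the configured privilege limit (empty output means the default set)
    zonecfg -z myzone info limitpriv
    # from inside the zone, list what is actually available
    ppriv -l zone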

    Read the article

  • How to install the PBC library with a fink installation, and what is meant by a fink installation

    - by user2910238
    jec@jec:~/cpabe/libpbc$ ./configure
    bash: ./configure: No such file or directory
    jec@jec:~/cpabe/libpbc$ ls
    announce      COPYING  gen                    include      misc         release
    arith         debian   gmp-5.1.3              INSTALL      NEWS         setup
    AUTHORS       doc      gmp-5.1.3.tar.bz2.sig  licence.odt  param        simple.make
    benchmark     ecc      gpl-3.0.odt            makedeb.sh   pbc          test
    configure.ac  example  guru                   Makefile.am  README
    jec@jec:~/cpabe/libpbc$ ./configure.ac
    bash: ./configure.ac: Permission denied
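    The listing has configure.ac and a setup script but no generated configure, so this is a source checkout that still needs its autotools files generated; configure.ac is an input file, not something to run. A sketch of the usual flow for such a tree, assuming the bundled setup script wraps autoreconf (as the PBC sources normally do) and that the build tools plus GMP development headers are installed:

        sudo apt-get install build-essential autoconf automake libtool flex bison libgmp-dev
        ./setup            # generates ./configure from configure.ac
        ./configure
        make
        sudo make install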

    Read the article

  • Real World Nuget

    - by JoshReuben
    Why Nuget
A higher level of granularity for managing references. When you have solutions of many projects that depend on solutions of many projects, etc., you need an escape from Solution Hell.

Links
· Using a GUI (Package Explorer) to build packages - http://docs.nuget.org/docs/creating-packages/using-a-gui-to-build-packages
· Creating a Nuspec File - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic2.aspx
· Consuming a Nuget Package - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic3
· Nuspec reference - http://docs.nuget.org/docs/reference/nuspec-reference
· Updating packages - http://nuget.codeplex.com/wikipage?title=Updating%20All%20Packages
· Versioning - http://docs.nuget.org/docs/reference/versioning

POC Folder Structure

POC Setup Steps
· Install Package Explorer
· Source
  o Create a source solution – configure the output directory for projects (Project > Properties > Build > Output Path)
· Package
  o Add assemblies to the package from the output directory (D&D) - add a net folder
  o File > Export – save the .nuspec file and lib contents

    <?xml version="1.0" encoding="utf-16"?>
    <package >
      <metadata>
        <id>MyPackage</id>
        <version>1.0.0.3</version>
        <title />
        <authors>josh-r</authors>
        <owners />
        <requireLicenseAcceptance>false</requireLicenseAcceptance>
        <description>My package description.</description>
        <summary />
      </metadata>
    </package>

  o File > Save – saves the .nupkg file
· Create Target Solution
  o In Tools > Options: configure the package source, add the package and select projects.

Output from the package manager (PowerShell console):

    ------- Installing...MyPackage 1.0.0 -------
    Added file 'NugetSource.AssemblyA.dll' to folder 'MyPackage.1.0.0\lib'.
    Added file 'NugetSource.AssemblyA.pdb' to folder 'MyPackage.1.0.0\lib'.
    Added file 'NugetSource.AssemblyB.dll' to folder 'MyPackage.1.0.0\lib'.
    Added file 'NugetSource.AssemblyB.pdb' to folder 'MyPackage.1.0.0\lib'.
    Added file 'MyPackage.1.0.0.nupkg' to folder 'MyPackage.1.0.0'.
    Successfully installed 'MyPackage 1.0.0'.
    Added reference 'NugetSource.AssemblyA' to project 'AssemblyX'
    Added reference 'NugetSource.AssemblyB' to project 'AssemblyX'
    Added file 'packages.config'.
    Added file 'packages.config' to project 'AssemblyX'
    Added file 'repositories.config'.
    Successfully added 'MyPackage 1.0.0' to AssemblyX.
    ==============================

o A packages folder is created at solution level
o A packages.config file is generated in each project:

    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <package id="MyPackage" version="1.0.0" targetFramework="net40" />
    </packages>

A local packages folder is created for the package versions installed. Each folder contains the downloaded .nupkg file and its unpacked contents – e.g. the DLLs that the project references. Note: this folder is not checked in.

UpdatePackages
o Configure the Package Manager to automatically check for updates
o Browse packages - it automatically picks up the updates

Update Procedure
· Modify the source
· Change the source version in the assembly info
· Build the source
· Open the last package in Package Explorer
· Increment the package version number and re-add the assemblies
· Save the package with the new version number and export its definition
· In the target solution – Tools > Manage Nuget Packages – click on All to trigger a refresh, then click on Recent packages to see the updates
· If problematic, delete the packages folder

Versioning

    uninstall-package mypackage
    install-package mypackage –version 1.0.0.3
    uninstall-package mypackage
    install-package mypackage –version 1.0.0.4

Dependencies

    <?xml version="1.0" encoding="utf-16"?>
    <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
      <metadata>
        <id>MyDependentPackage</id>
        <version>1.0.0</version>
        <title />
        <authors>josh-r</authors>
        <owners />
        <requireLicenseAcceptance>false</requireLicenseAcceptance>
        <description>My package description.</description>
        <dependencies>
          <group targetFramework=".NETFramework4.0">
            <dependency id="MyPackage" version="1.0.0.4" />
          </group>
        </dependencies>
      </metadata>
    </package>

Using NuGet without committing packages to source control
http://docs.nuget.org/docs/workflows/using-nuget-without-committing-packages
Right-click on the Solution node in Solution Explorer and select Enable NuGet Package Restore. Recall that the packages folder is not part of the solution. If you get "downloading package 'Nuget.build' failed", configure the proxy to support the certificate for https://nuget.org/api/v2/ and allow unrestricted access to packages.nuget.org.
To test connectivity: get-package –listavailable
To test NuGet Package Restore, delete the packages folder and open VS as admin. In the NuGet msbuild targets:

    <Import Project="$(SolutionDir)\.nuget\nuget.targets" />

TFSBuild Integration
Modify the Nuget.Targets file:

    <RestorePackages Condition="  '$(RestorePackages)' == '' ">
      True
    </RestorePackages>
    …
    <PackageSource Include="\\IL-CV-004-W7D\Packages" />

Add the system environment variable EnableNuGetPackageRestore=true and restart the "Visual Studio Team Foundation Build Service Host" service.
Important: ensure Network Service has access to the Packages folder.

Nugetter TFS Build integration
Add the Nugetter build process templates to TFS source control. For the Build Controller, specify the location of custom assemblies. Generate the .nuspec file from Package Explorer: File > Export. Edit the file elements – remove path info from the src and target attributes:

    <?xml version="1.0" encoding="utf-16"?>
    <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
        <metadata>
            <id>Common</id>
            <version>1.0.0</version>
            <title />
            <authors>josh-r</authors>
            <owners />
            <requireLicenseAcceptance>false</requireLicenseAcceptance>
            <description>My package description.</description>
            <dependencies>
                <group targetFramework=".NETFramework3.5" />
            </dependencies>
        </metadata>
        <files>
            <file src="CommonTypes.dll" target="CommonTypes.dll" />
            <file src="CommonTypes.pdb" target="CommonTypes.pdb" />
            …

Add the .nuspec file to the solution so that it is available for the build: Dev\NovaNuget\Common\NuSpec\common.1.0.0.nuspec
Add a Build Process Definition based on the Nugetter build process template. Configure the build process – specify:
· the .sln to build
· the base path (output directory)
· the Nuget.exe file path
· the .nuspec file path

Copy DLLs to a binary folder
1) Set copy local for an assembly reference to false
2) MSBuild Copy Task – modify the .csproj file: http://msdn.microsoft.com/en-us/library/3e54c37h.aspx

    <ItemGroup>
        <MySourceFiles Include="$(MSBuildProjectDirectory)\..\SourceAssemblies\**\*.*" />
    </ItemGroup>
    <Target Name="BeforeBuild">
        <Copy SourceFiles="@(MySourceFiles)" DestinationFolder="bin\debug\SourceAssemblies" />
    </Target>

3) Set the probing assembly search path from app.config - http://msdn.microsoft.com/en-us/library/823z9h8w(v=vs.80).aspx

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <probing privatePath="SourceAssemblies"/>
        </assemblyBinding>
      </runtime>
    </configuration>

Forcing 'copy local = false'
The following generic PowerShell script was added to the package's install.ps1:

    param($installPath, $toolsPath, $package, $project)
    if( $project.Object.Project.Name -ne "CopyPackages")
    {
        $asms = $package.AssemblyReferences | %{$_.Name}
        foreach ($reference in $project.Object.References)
        {
            if ($asms -contains $reference.Name + ".dll")
            {
                $reference.CopyLocal = $false;
            }
        }
    }

An empty project named "CopyPackages" was added to the solution - it references all the packages and is the only one set to CopyLocal="true". No MSBuild knowledge required.

    Read the article

  • NFS server generating "invalid extent" on EXT4 system disk?

    - by Stephen Winnall
    I have a server running Xen 4.1 with Oneiric in the dom0 and in each of the 4 domUs. The system disks of the domUs are LVM2 volumes built on top of an mdadm RAID1. All the domU system disks are EXT4 and were created using snapshots of the same original template. 3 of them run perfectly, but one (called s-ub-02) keeps on being remounted read-only. A subsequent e2fsck results in a single "invalid extent" diagnosis:

    e2fsck 1.41.14 (22-Dec-2010)
    /dev/domu/s-ub-02-root contains a file system with errors, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Inode 525418 has an invalid extent
        (logical block 8959, invalid physical block 0, len 0)
    Clear<y>? yes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    /dev/domu/s-ub-02-root: 77757/655360 files (0.3% non-contiguous), 360592/2621440 blocks

The console typically shows the following errors for the system disk (xvda2):

    [101980.903416] EXT4-fs error (device xvda2): ext4_ext_find_extent:732: inode #525418: comm apt-get: bad header/extent: invalid extent entries - magic f30a, entries 12, max 340(340), depth 0(0)
    [101980.903473] EXT4-fs (xvda2): Remounting filesystem read-only

I have created new versions of the system disk. The same thing always happens. This, and the fact that the disk is ultimately on a RAID1, leads me to rule out a hardware disk error. The only obvious distinguishing feature of this domU is the presence of nfs-kernel-server, so I suspect that. Its exports file looks like this:

    /exports/users 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
    /exports/media/music 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
    /exports/media/pictures 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
    /exports/opt 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)

/exports/users and /exports/opt are LVM2 volumes from the same volume group as the system disk. /exports/media is an EXT2 volume. (There is an issue where clients see /exports/media/pictures as a read-only volume, which I mention for completeness.) With the exception of the read-only problem, the NFS server appears to work correctly under light load for several hours before the "invalid extent" problem occurs. There are no helpful entries in /var/log. All of a sudden no more files are written, so you can see when the disk was remounted read-only, but there is no indication of what the cause might be. Can anyone help me with this problem? Steve

    Read the article

  • Where do I create the file .htaccess, in order to serve my HTML5 cache manifest file correctly?

    - by Forrest
    From a post on http://diveintohtml5.org/offline.html (Wayback Machine copy):

    "Your cache manifest file can be located anywhere on your web server, but it must be served with the content type text/cache-manifest. If you are running an Apache-based web server, you can probably just put an AddType directive in the .htaccess file at the root of your web directory: AddType text/cache-manifest .manifest"

    Where do I create the .htaccess file? Is some more setup with apachectl needed? Thanks very much!
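    .htaccess is just a plain text file that you create yourself in the directory it should apply to, in this case the site's document root, and no apachectl changes are needed as long as AllowOverride permits it. A sketch, assuming the document root is /var/www and an illustrative manifest name (substitute your own):

        # create the file at the web root with the single directive from the article
        echo 'AddType text/cache-manifest .manifest' | sudo tee /var/www/.htaccess
        # confirm the manifest is now served with the right Content-Type
        curl -I http://localhost/example.manifest | grep -i content-type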

    Read the article

  • Extend depth of .htaccess to all subfolders and their children

    - by JoXll
    I need to be able to use .htaccess in all subfolders, to full depth. E.g. I have a .htaccess in the public_html folder:

    \public_html\.htaccess

    How do I make it work for the folder small as well?

    \public_html\home\images\red\thumbs\small\

    It is only enforced up to the home directory, not deeper.

    ErrorDocument 403 http://google.com
    Order Deny,Allow
    Deny from all
    Allow from 11.22.33.44
    Options +FollowSymlinks
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
    RewriteRule ^(.*)$ http://%1/$1 [R=301,L]
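    Directives in a .htaccess file normally apply to the directory it lives in and to every directory below it automatically; if they stop taking effect at a certain depth, the usual causes are a deeper .htaccess overriding them or an AllowOverride setting in the server config. A couple of things to check, as a sketch (paths are illustrative):

        # look for additional .htaccess files further down that may override the top one
        find /path/to/public_html -name .htaccess
        # in the vhost or main config, the directory block must allow overrides, e.g.
        # <Directory /path/to/public_html>
        #     AllowOverride All
        # </Directory>

    For the rewrite rules specifically, mod_rewrite does not automatically inherit into subdirectories that declare their own RewriteEngine On; adding "RewriteOptions Inherit" in the child .htaccess pulls the parent's rules in.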

    Read the article

  • Looking for mass cropping software

    - by Bart van Heukelom
    I'm looking for a tool that runs on Ubuntu and lets me:

    Open an image in a folder which has thousands of images.
    Crop and rotate it.
    Save it as a copy, automatically named (not manually), with one click. Preferably with something in the name that I can later use to filter these cropped copies in Nautilus (unless it saves to another directory, which would be even better).
    Move to the next image and repeat.

    Does it exist?
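    If the crop happens to be the same rectangle for every image, the interactive step can be skipped entirely with ImageMagick; this does not help when each photo needs its own framing, but it covers the bulk case and produces copies named so they are easy to filter. A sketch, assuming the imagemagick package and an illustrative 800x600 crop offset by +10+20:

        sudo apt-get install imagemagick
        mkdir -p cropped
        for f in *.jpg; do
            # -crop takes WxH+X+Y; -rotate 90 shown as an example; output goes to a separate folder
            convert "$f" -crop 800x600+10+20 -rotate 90 "cropped/${f%.jpg}_cropped.jpg"
        done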

    Read the article

  • Package dependency errors: libc

    - by piyush
    I was trying to install kde-full when libc hit some unmet-dependency errors. When running sudo apt-get install kde-full, the terminal shows this at the end:

    libc6 : Depends: libc-bin (= 2.15-0ubuntu10)
    libc6:i386 : Depends: libc-bin:i386 (= 2.15-0ubuntu10)
    libc6-dev : Depends: libc6 (= 2.15-0ubuntu10.3) but 2.15-0ubuntu10 is to be installed
    libc6-i386 : Depends: libc6 (= 2.15-0ubuntu10.3) but 2.15-0ubuntu10 is to be installed

    When running sudo apt-get -f install, this shows up at the end:

    De-configuring libc6:i386 ...
    A copy of the C library was found in an unexpected directory: '/lib/x86_64-linux-gnu/libc-2.15.so'
    It is not safe to upgrade the C library in this situation; please remove that copy of the C library or get it out of '/lib/x86_64-linux-gnu' and try again.
    dpkg: error processing /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_amd64.deb (--unpack):
     subprocess new pre-installation script returned error exit status 1
    Preparing to replace libc6:i386 2.15-0ubuntu10 (using .../libc6_2.15-0ubuntu10.3_i386.deb) ...
    De-configuring libc6 ...
    A copy of the C library was found in an unexpected directory: '/lib/i386-linux-gnu/ld-2.15.so'
    It is not safe to upgrade the C library in this situation; please remove that copy of the C library or get it out of '/lib/i386-linux-gnu' and try again.
    dpkg: error processing /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_i386.deb (--unpack):
     subprocess new pre-installation script returned error exit status 1
    Errors were encountered while processing:
     /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_amd64.deb
     /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_i386.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any suggestions on how to fix this? I no longer want kde-full; I only want other installations to work. I've already run sudo apt-get update several times, so that suggestion can be set aside.

    UPD: here is the output of dpkg --configure -a:

    ~$ sudo dpkg --configure -a
    dpkg: dependency problems prevent configuration of libc6-dev:
     libc6-dev depends on libc6 (= 2.15-0ubuntu10.3); however:
      Version of libc6 on system is 2.15-0ubuntu10.
    dpkg: error processing libc6-dev (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of libc6-i386:
     libc6-i386 depends on libc6 (= 2.15-0ubuntu10.3); however:
      Version of libc6 on system is 2.15-0ubuntu10.
    dpkg: error processing libc6-i386 (--configure):
     dependency problems - leaving unconfigured
    Errors were encountered while processing:
     libc6-dev
     libc6-i386
    ~$
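    The "copy of the C library in an unexpected directory" check fires when dpkg tries to upgrade libc6 for one architecture while the other architecture's copy is still at the old version; on a multiarch 12.04 system the two have to move together. One approach that has been used for this situation, offered here only as a sketch (the .deb files are already sitting in the apt cache according to the errors above), is to hand both packages to dpkg in a single invocation so they are unpacked in lockstep:

        cd /var/cache/apt/archives
        sudo dpkg -i libc6_2.15-0ubuntu10.3_amd64.deb libc6_2.15-0ubuntu10.3_i386.deb
        # then let apt sort out the rest
        sudo apt-get -f install
        sudo dpkg --configure -a

    If dpkg still complains about libc-bin, add the matching libc-bin .deb from the same cache directory to the same dpkg -i command.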

    Read the article

  • Gradle for NetBeans RCP

    - by Geertjan
    Start with the NetBeans Paint Application and do the following to build it via Gradle (i.e., no Gradle/NetBeans plugin is needed for the following steps), assuming you've set up Gradle. Do everything below in the Files or Favorites window, not in the Projects window.

In the application directory "Paint Application":

Create a file named "settings.gradle", with this content:

    include 'ColorChooser', 'Paint'

Create another file in the same location, named "build.gradle", with this content:

    subprojects {
        apply plugin: "announce"
        apply plugin: "java"
        sourceSets {
            main {
                java {
                    srcDir 'src'
                }
                resources {
                    srcDir 'src'
                }
            }
        }
    }

In the module directory "Paint":

Create a file named "build.gradle", with this content:

    dependencies {
        compile fileTree("$rootDir/build/public-package-jars").matching {
            include '**/*.jar'
        }
    }

    task show << {
        configurations.compile.each { dep ->
            println "$dep ${dep.isFile()}"
        }
    }

Note: the above is a temporary solution. As you can see, the expectation is that the JARs are in the 'build/public-package-jars' folder, which assumes an Ant build has been done prior to the Gradle build.

Now run 'gradle classes' in the "Paint Application" folder and everything will compile correctly. So, this is how the Paint Application now looks:

Preferable to the second 'build.gradle' would be this, which uses the JARs found in the NetBeans Platform:

    netbeansHome = '/home/geertjan/netbeans-dev-201111110600'

    dependencies {
        compile files("$rootDir/ColorChooser/release/modules/ext/ColorChooser.jar")
        def projectXml = new XmlParser().parse("nbproject/project.xml")
        projectXml.configuration.data."module-dependencies".dependency."code-name-base".each {
            if (it.text().equals('org.openide.filesystems')) {
                def dep = "$netbeansHome/platform/core/" + it.text().replace('.','-') + '.jar'
                compile files(dep)
            } else if (it.text().equals('org.openide.util.lookup') || it.text().equals('org.openide.util')) {
                def dep = "$netbeansHome/platform/lib/" + it.text().replace('.','-') + '.jar'
                compile files(dep)
            } else {
                def dep = "$netbeansHome/platform/modules/" + it.text().replace('.','-') + '.jar'
                compile files(dep)
            }
        }
    }

    task show << {
        configurations.compile.each { dep ->
            println "$dep ${dep.isFile()}"
        }
    }

However, when you run 'gradle classes' with the above, you get an error like this:

    geertjan@geertjan:~/NetBeansProjects/PaintApp1/Paint$ gradle classes
    :Paint:compileJava
    [ant:javac] Note: Attempting to workaround javac bug #6512707
    [ant:javac]
    [ant:javac]
    [ant:javac] An annotation processor threw an uncaught exception.
    [ant:javac] Consult the following stack trace for details.
    [ant:javac] java.lang.NullPointerException
    [ant:javac] at com.sun.tools.javac.util.DefaultFileManager.getFileForOutput(DefaultFileManager.java:1058)

No idea why the above happens, still trying to figure it out. Once the above works, we can start figuring out how to use the NetBeans Maven repo instead, and then the user of the plugin will be able to select whether to use local JARs or JARs from the NetBeans Maven repo. Many thanks to Hans Dockter, who put the above together with me today, via Skype!

    Read the article
