Search Results

Search found 32130 results on 1286 pages for 'local search'.


  • Why would the 'show processlist' command speed up normally slow requests to my remote DB? (connected via VPN)

    - by Hakan B.
    I am running a local Django development server that connects to a remote MySQL server via a VPN (IPSec). Request times are awfully slow and I consistently see timeouts. Attempting to diagnose the problem, I logged in to the remote database and ran: show full processlist. Immediately, the local server went from idle to working. The page had not yet completely loaded, but progress had been made (debug logs confirm this). When I ran 'show full processlist' several more times in succession, the request completed quickly. I can currently reproduce this - unless I run 'show full processlist' over and over on the remote server, my local request usually times out. Does anyone have any idea why this would happen? I'm running Django 1.3 and OS X 10.7. Note: I realize this may not be a question with a clear-cut answer and is probably my fault, but it is odd and reproducible, so I hope someone can at least point me in the right direction. Thanks in advance.
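
    A minimal sketch of the workaround described above, assuming the mysql command-line client is available locally; the host name and credentials are placeholders:

        # Hypothetical loop reproducing the observed workaround: keep polling
        # the remote server with SHOW FULL PROCESSLIST while a local request runs.
        while true; do
          mysql -h remote-db.example.com -u appuser -psecret \
                -e 'SHOW FULL PROCESSLIST' > /dev/null
          sleep 1
        done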

    Read the article

  • TFS 2010 Build: Dealing with the API restriction error

    - by Jakob Ehn
    Recently I’ve come across this error a couple of times when running builds that execute unit tests using test containers: API restriction: The assembly 'file:///C:\Builds\<path>\myassembly.dll' has already loaded from a different location. It cannot be loaded from a new location within the same appdomain. Every time I’ve got this error, the project has been a web application, and the path to the assembly points down to the _PublishedWebsites directory that is created beneath the Binaries folder during a team build. The error description really says it all (although slightly cryptically): when using test containers, MSTest needs to load all assemblies and see if they contain any unit tests. During this search, it finds ‘myassembly.dll’ in two different locations. First it is found directly beneath the Binaries folder, and then it is also found beneath the _PublishedWebsites\Project\bin folder. The reason is that the default setting for test containers in a TFS 2010 build definition is **\*test*.dll. This pattern means that MSTest will search recursively for all assemblies beneath the Binaries folder, and during the search it will find MyAssembly.dll twice. The solution is simple: set the Test assembly file specification property to *test*.dll instead, which disables the recursive search.

    Read the article

  • pfsense multi-site VPN VOIP deployment

    - by sysconfig
    I have a main office pfsense firewall configured with these local networks: WAN - internet; LAN - local network; VOIP - IP phones. I need to connect remote offices (multiple users) and single remote users (from home), using IPSEC or OpenVPN to build "permanent", automatically connecting tunnels from each remote location to the main location. In the remote locations the network will look like this: WAN - internet; LAN - local network with multiple users; VOIP - multiple IP phones. In order for the IP phones to work, they have to be able to "see" the VOIP network and the VOIP server back at the main office. For single remote users (such as from home) the setup will be similar, but with only one phone and one computer. So, questions: What is the best way to tie the networks together - IPSEC or OpenVPN? Can this be set up to connect automatically? Are there any issues with or suggestions for that design/topology? Any QoS concerns or issues with running the VOIP traffic over a VPN (throughput, quality, etc.)? Obviously this depends on the remote locations' connections to some degree.

    Read the article

  • Complex knowledge management system with CRM... written internally

    - by JonH
    We've all heard of Salesforce and SugarCRM and systems like them. Unfortunately, at my workplace we have been asked to write a similar system (rather than license or purchase one). Basically the database is fairly large. Think of modules such as: corporate groups, customers, programs, projects, sub-projects, and issue management. In simple terms, a corporate group has one to many customers. A program has one or more projects. A project has one or more sub-projects. And an issue can be created on many sub-projects. Of course the system is a bit more complex, but instead of listing every single module I think it's best to keep it simple. In any event, the system in its current state has only two resources working on it (basically we have to do it all: CSS, database, jQuery, ASP.NET and C#). We've started off well by defining the UI master and footer pages so that we can reuse those across all of our pages. Now comes the hard part. The system will have about 4k end users, with say 5-10% being concurrent users. We are wondering if it makes sense to cache our database data (for say 5-10 minutes) rather than continuously hit our database. The reason is that some of these pages may have 5-10 search filters associated with the page. Imagine how many database hits occur every time a selection is made in a search box. Also, some of these search fields cascade, so selecting an initial drop-down may cascade to several drop-down boxes under it. Is it wrong to cache? I am not finding many articles on whether it is a good idea or not. Remember, the system is similar to a CRM system where we manage our various customers, projects, sub-projects, issues, etc.

    Read the article

  • Windows 8 Login Password Out of Sync with Windows Live ID

    - by Israel Lopez
    I'm working with a computer that has a local account set up and connected to a Windows Live ID. The user can log in to the Live ID (like Hotmail) from another computer with the correct credentials. However, from the Windows 8 computer, using the correct password, it reports: "That password is incorrect. Make sure you're using the password for your Microsoft account. You can always reset it at account.live.com/password/reset." Now, I've used NTPASSWD to reset the password, but since it's not a "Local Account" it won't take the new password or a blank one. The account also has a PIN, which the user has forgotten as well. I also tried to enable the local Administrator account and set its password, but it does not show up for login. Any ideas?

    Read the article

  • Java process failure (hadoop, hbase)

    - by Vladimir
    Any time I run a hadoop/hbase process from a command prompt I get an error: /usr/local/hadoop/bin/hadoop: line 320: /usr/lib/jvm/jdk1.7.0/bin/java: cannot execute binary file; /usr/local/hadoop/bin/hadoop: line 390: /usr/lib/jvm/jdk1.7.0/bin/java: cannot execute binary file; /usr/local/hadoop/bin/hadoop: line 390: /usr/lib/jvm/jdk1.7.0/bin/java: Success. I get the same kind of error when I start hbase. My java version is: java version "1.7.0_07"; Java(TM) SE Runtime Environment (build 1.7.0_07-b10); Java HotSpot(TM) Server VM (build 23.3-b01, mixed mode). Could you please tell me what could cause the issue? Thank you.
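
    A hedged first check, since "cannot execute binary file" frequently means the binary's architecture does not match the machine's; the JDK path is taken from the error messages above:

        # Compare the java binary's architecture with the host's
        file /usr/lib/jvm/jdk1.7.0/bin/java
        uname -m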

    Read the article

  • Overriding Debian default groups from LDAP

    - by Ex-Parrot
    This is a thing that has always bothered me: how am I best to handle Debian standard groups for LDAP users? Debian has a number of groups defined by default, e.g. plugdev, audio, cdrom and so on. These control access in standard Debian installs. When I want a user from LDAP to be a member of the `audio' group on all machines they log in to, I've tried a few different things: Adding them to the local group on the machine (this works but is hard to maintain) Creating a group in LDAP with the same name and a different GID then adding the user to that group (breaks reverse / forward GID mapping, doesn't seem to work) Creating a group in LDAP with the same name and same GID and adding the user to that group (doesn't seem to work at all, things don't see the LDAP group members) Creating a group in LDAP with the same name and same GID then removing the local group (this works but upsets Debian's maintenance scripts during upgrades that check for local system sanity) What's the best practice for this scenario?
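
    As an illustration of the third approach, a hypothetical LDAP entry mirroring Debian's local audio group (GID 29 on a standard Debian install); the base DN and member are placeholders:

        # LDIF sketch: posixGroup with the same name and GID as the local group
        dn: cn=audio,ou=Groups,dc=example,dc=com
        objectClass: posixGroup
        cn: audio
        gidNumber: 29
        memberUid: alice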

    Read the article

  • PHP shared extensions on Linux

    - by F21
    I am running Ubuntu Server 12.04 and prefer to compile PHP myself as opposed to installing it using apt-get. PHP is running as PHP-FPM. When compiling extensions, I can set it to be compiled as a shared extension using something like --with-bcmath=shared and so on. Are there any benefits to compiling the extensions as shared? I also noticed that the extensions are compiled into a pretty convoluted folder. On my system (my php prefix is /usr/local/php-5.4.9) the extensions end up in /usr/local/php-5.4.9/lib/php/extensions/no-debug-non-zts-20100525. Is there a global way to set a folder so that all shared extensions will be compiled in there? I understand that I can do something like --with-foobar=shared,/usr/local/foobar/ but having to set the extension folder for each shared extension is inefficient and error-prone.
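
    If I remember correctly, PHP's configure script honours an EXTENSION_DIR environment variable, which would set a single target directory for all shared extensions; treat this as an assumption to verify against your PHP version's install docs:

        # Hypothetical: pin the shared-extension directory once for the whole build
        EXTENSION_DIR=/usr/local/php-5.4.9/lib/php/extensions \
          ./configure --prefix=/usr/local/php-5.4.9 --with-bcmath=shared
        # php.ini must then point at the same directory:
        #   extension_dir = "/usr/local/php-5.4.9/lib/php/extensions"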

    Read the article

  • How to make the start menu find a program based on a custom keyword?

    - by Pierre-Alain Vigeant
    I am searching for a way to type a keyword in the start menu's "Search programs and files" field so that it returns the application matching that keyword. An example will explain this better: suppose that I want to start PowerShell. Currently I can type power in the search field, and the first item that appears is the 64-bit PowerShell shortcut. Now suppose that I'd like ps to return PowerShell as the first item in the search list. Currently, typing ps returns all files with the .ps extension, along with a Control Panel option about recording steps, but not the PowerShell executable itself. How can I do that?

    Read the article

  • CentOS - PHP - Yum Install with Custom ./configure params

    - by Mike Purcell
    I have successfully configured and compiled PHP on my dev server, and it works great, but after talking to a sysadmin buddy, he advised that custom compiles of the latest builds are not recommended for production (or even development) systems. He described a situation where they custom configured and compiled PHP 5.3.6, only to find that there was some issue with a low-level Postgres driver, so they had to revert back to 5.3.3. So I am considering going back to yum to install PHP; however, I have several custom configuration settings and was wondering if it's possible to control how PHP is compiled through yum. My current configure line: Configure Command => './configure' '--with-libdir=lib64' '--prefix=/usr/local/_custom/app/php' '--with-config-file-path=/usr/local/_custom/app/php/etc' '--with-config-file-scan-dir=/usr/local/_custom/app/php/etc/modules' '--disable-all' '--with-apxs2=/usr/sbin/apxs' '--with-curl=/usr/sbin/curl' '--with-gd' '--with-iconv' '--with-jpeg-dir=/usr/lib' '--with-mcrypt=/usr/bin' '--with-pcre-regex' '--with-pdo-mysql=mysqlnd' '--with-png-dir=/usr/lib' '--with-zlib' '--enable-ctype' '--enable-dom' '--enable-hash' '--enable-json' '--enable-libxml' '--enable-mbstring' '--enable-mbregex' '--enable-pdo' '--enable-session' '--enable-simplexml' '--enable-xml' '--enable-xmlreader' '--enable-xmlwriter'
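
    yum itself cannot take configure flags, so one hedged route is rebuilding the distribution's source RPM with an edited spec file; this assumes yum-utils and rpm-build are installed, and the build directory varies by CentOS release:

        # Sketch: fetch the source RPM, adjust the configure flags, rebuild
        yumdownloader --source php
        rpm -ivh php-*.src.rpm
        # edit the ./configure options in the php.spec file, then:
        rpmbuild -ba ~/rpmbuild/SPECS/php.spec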

    Read the article

  • synergy-plus: is there some kind of log or error control?

    - by ufk
    Hiya. I configured a synergy-plus server and client as described in the following url: http://www.engadget.com/2005/08/09/how-to-share-your-keyboard-and-mouse-in-realtime-with-synergy/ I didn't find any info indicating that there is a log file or any error control to see if the client connects correctly. How can I troubleshoot? I am trying to connect between a Linux OS and Snow Leopard. This is my server configuration file (music-snow is the Snow Leopard machine, tux-in is the Linux one): section: screens tux-in.local: music-snow: end section: links tux-in.local: right = music-snow music-snow: left = tux-in.local end
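
    One hedged way to watch what is happening, assuming the synergy 1.x flags I remember (-f keeps the process in the foreground, -d sets the log verbosity); output then goes straight to the terminal:

        # On the server:
        synergys -f -d DEBUG
        # On the client, pointing at the server's host name:
        synergyc -f -d DEBUG tux-in.local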

    Read the article

  • In Windows 7 Home Premium, is it possible to grant a user account the "log on as a service" right and if so, how?

    - by Ryan Johnson
    The title says it all. I need the ability for a local user account to log on as a service on a computer running Windows 7 Home Premium. In Windows 7 Ultimate, this is accomplished by going to Control Panel - Administrative Tools - Local Security Policy and adding the user to the "Log on as a service" policy. In Home Premium, there is no Local Security Policy in the Control Panel. Is there another way to add the user to that policy (i.e. a registry setting), or is my only recourse to upgrade the computer to Windows 7 Professional? Thanks in advance, Ryan

    Read the article

  • PFSense CSR Generation

    - by ErnieTheGeek
    I'm trying to figure out how to generate a CSR so I can generate and install an SSL cert. Here's a LINK to what I've tried. Granted, that post was for m0n0wall, but I figured openssl is openssl. Here's where I get stuck. When I run this: /usr/bin/openssl req -new -key mykey.key -out mycsr.csr -config /usr/local/ssl/openssl.cnf I get this: error on line -1 of /usr/local/ssl/openssl.cnf 54934:error:02001002:system library:fopen:No such file or directory:/usr/src/secure/lib/libcrypto/../../../crypto/openssl /crypto/bio/bss_file.c:122:fopen('/usr/local/ssl/openssl.cnf','rb') 54934:error:2006D080:BIO routines:BIO_new_file:no such file:/usr/src/secure/lib/libcrypto/../../../crypto/openssl/crypto/ bio/bss_file.c:125: 54934:error:0E078072:configuration file routines:DEF_LOAD:no such file:/usr/src/secure/lib/libcrypto/../../../crypto/open ssl/crypto/conf/conf_def.c:197:
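
    The errors say openssl cannot open /usr/local/ssl/openssl.cnf, so a first step is asking this openssl build where its configuration directory really is and pointing -config there:

        # Print the directory this openssl build expects its config in
        openssl version -d
        # Then reuse that path; e.g. if it prints OPENSSLDIR: "/etc/ssl":
        /usr/bin/openssl req -new -key mykey.key -out mycsr.csr \
            -config /etc/ssl/openssl.cnf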

    Read the article

  • dansguardian error: filterports must match number of filterips (pfsense)

    - by Bulki
    Hi, I'm setting up pfsense with the squid3 and dansguardian packages. When I try to start the dansguardian service, however, I get the following errors: May 27 22:17:37 php: /pkg_edit.php: The command '/usr/local/etc/rc.d/dansguardian.sh start' returned exit code '1', the output was 'kern.ipc.somaxconn: 16384 -> 16384 kern.maxfiles: 131072 -> 131072 kern.maxfilesperproc: 104856 -> 104856 kern.threads.max_threads_per_proc: 4096 -> 4096 Starting dansguardian. filterports (2) must match number of filterips (1) Error parsing the dansguardian.conf file or other DansGuardian configuration files /usr/local/etc/rc.d/dansguardian.sh: WARNING: failed to start dansguardian' May 27 22:17:37 root: /usr/local/etc/rc.d/dansguardian.sh: WARNING: failed to start dansguardian May 27 22:17:37 dansguardian[52944]: Error parsing the dansguardian.conf file or other DansGuardian configuration files May 27 22:17:37 dansguardian[52944]: filterports must match number of filterips What does "filterports must match number of filterips" mean? Any thoughts on the matter?
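
    Reading the error literally, dansguardian.conf appears to want exactly one filterports entry per filterip entry. A hypothetical illustration - the exact syntax here is from memory, so check it against the comments in the packaged dansguardian.conf:

        # dansguardian.conf sketch: two listening IPs, two matching ports
        filterip = 192.168.1.1
        filterip = 127.0.0.1
        filterports = 8080
        filterports = 8081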

    Read the article

  • Set Windows 7 Default Login to a Non Domain Account

    - by Joe Taylor
    We have 12 laptop PCs that we have upgraded from Windows XP to Windows 7. The laptops are used by staff on away days. They log on to a local account on the machine - say User1, with no password. On the Windows XP login screen there was a drop-down menu allowing them to log on to the local machine. However, in Windows 7 there is no such box, and it is confusing staff. Windows 7 tries to log into the domain by default; it doesn't seem to remember where the user last logged in. Is there a way to set Windows 7 to log on to the local machine by default instead of the domain? I do not want the staff to have to type, for example, stafflaptop1\User1 when they log on.
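
    A hedged registry-side approach: Windows 7 pre-selects the account recorded under the LogonUI key, so setting it to a local account (note the .\ prefix meaning "this machine") may give the wanted default. Treat the key and value names as assumptions to verify on one laptop first:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI]
        "LastLoggedOnUser"=".\\User1"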

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
    Preface This is the content from the Oracle Openworld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - that's probably because in writing it I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year, which was almost too long for mortal man to finish in the 1 hour session. Enjoy. Table of Contents Exercise Z.1: ZFS Pools Exercise Z.2: ZFS File Systems Exercise Z.3: ZFS Compression Exercise Z.4: ZFS Deduplication Exercise Z.5: ZFS Encryption Exercise Z.6: Solaris 11 Shadow Migration Introduction This set of exercises is designed to briefly demonstrate new features in the Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology - and also Compression, which is the complement of Deduplication. The exercises are just introductions - you are referred to the ZFS Administration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M) with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in a VirtualBox running Solaris 11 with 6 virtual 3 Gb disks to play with. Exercise Z.1: ZFS Pools Task: You have several disks to use for your new file system. Create a new zpool and a file system within it. Lab: You will check the status of existing zpools, create your own pool and expand it. Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this: root@solaris:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE - root@solaris:~# zpool status pool: rpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 c3t0d0s0 ONLINE 0 0 0 errors: No known data errors Note the disk device the root pool is on - c3t0d0s0. Now you will create your own ZFS pool. First you will check what disks are available: root@solaris:~# echo | format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0 1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0 2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0 3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0 4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0 5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0 6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0 Specify disk (enter its number): Specify disk (enter its number): The root disk is numbered 0. The others are free for use.
    Try creating a simple pool and observe the error message: root@solaris:~# zpool create mypool c3t2d0 c3t3d0 'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool So destroy that pool and create a mirrored pool instead: root@solaris:~# zpool destroy mypool root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0 root@solaris:~# zpool status mypool pool: mypool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c3t2d0 ONLINE 0 0 0 c3t3d0 ONLINE 0 0 0 errors: No known data errors Exercise Z.2: ZFS File Systems Task: You have to create file systems for later exercises. You can see that when a pool is created, a file system of the same name is created: root@solaris:~# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 86.5K 2.94G 31K /mypool Create your filesystems and mountpoints as follows: root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1 The -o option sets the mount point and automatically creates the necessary directory. root@solaris:~# zfs list mypool/mydata1 NAME USED AVAIL REFER MOUNTPOINT mypool/mydata1 31K 2.94G 31K /data1 Exercise Z.3: ZFS Compression Task: Try out different forms of compression available in ZFS. Lab: Create a 2nd filesystem with compression, fill both file systems with the same data, and observe the results. You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax: compression=on | off | lzjb | gzip | gzip-N | zle Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)). Create a second filesystem with compression turned on. Note how you set and get your values separately: root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2 root@solaris:~# zfs set compression=gzip-9 mypool/mydata2 root@solaris:~# zfs get compression mypool/mydata1 NAME PROPERTY VALUE SOURCE mypool/mydata1 compression off default root@solaris:~# zfs get compression mypool/mydata2 NAME PROPERTY VALUE SOURCE mypool/mydata2 compression gzip-9 local Now you can copy the contents of /usr/lib into both your normal and compressing filesystem and observe the results. Don't forget the dot or period (".") in the find(1) command below: root@solaris:~# cd /usr/lib root@solaris:/usr/lib# find . -print | cpio -pdv /data1 root@solaris:/usr/lib# find . -print | cpio -pdv /data2 The copy into the compressing file system takes longer, as it has to perform the compression, but the results show the effect: root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.35G 1.59G 31K /mypool mypool/mydata1 1.01G 1.59G 1.01G /data1 mypool/mydata2 341M 1.59G 341M /data2 Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations, which are not covered in this lab but are covered extensively in the ZFS Administrators Guide. Exercise Z.4: ZFS Deduplication The deduplication property is used to remove redundant data from a ZFS file system.
    With the property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared. Task: See how to implement deduplication and its effects. Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib. root@solaris:/usr/lib# zfs destroy mypool/mydata2 root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1 root@solaris:/usr/lib# rm -rf /data1/* root@solaris:/usr/lib# mkdir /data1/2nd-copy root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02M 2.94G 31K /mypool mypool/mydata1 43K 2.94G 43K /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02G 1.99G 31K /mypool mypool/mydata1 1.01G 1.99G 1.01G /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.99G 1.96G 31K /mypool mypool/mydata1 1.98G 1.96G 1.98G /data1 You could go on creating copies for quite a while... but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool and there is a zpool-wide property dedupratio: root@solaris:/usr/lib# zpool get dedupratio mypool NAME PROPERTY VALUE SOURCE mypool dedupratio 4.30x - Deduplication can also be checked using "zpool list": root@solaris:/usr/lib# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT mypool 2.98G 1001M 2.01G 32% 4.30x ONLINE - rpool 15.9G 6.66G 9.21G 41% 1.00x ONLINE - Before moving on to the next topic, destroy that dataset and free up some space: root@solaris:~# zfs destroy mypool/mydata1 Exercise Z.5: ZFS Encryption Task: Encrypt sensitive data. Lab: Explore basic ZFS encryption. This lab only covers the basics of ZFS encryption. In particular it does not cover various aspects of key management. Please see the ZFS Administration Manual and the zfs_encrypt(1M) manual page for more detail on this functionality. root@solaris:~# zfs create -o encryption=on mypool/data2 Enter passphrase for 'mypool/data2': ******** Enter again: ******** root@solaris:~# Creation of a descendent dataset shows that encryption is inherited from the parent: root@solaris:~# zfs create mypool/data2/data3 root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2 NAME PROPERTY VALUE SOURCE mypool/data2 encryption on local mypool/data2 keysource passphrase,prompt local mypool/data2 keystatus available - mypool/data2 checksum sha256-mac local mypool/data2/data3 encryption on inherited from mypool/data2 mypool/data2/data3 keysource passphrase,prompt inherited from mypool/data2 mypool/data2/data3 keystatus available - mypool/data2/data3 checksum sha256-mac inherited from mypool/data2 You will find the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session you may wish to explore the changing of a key using "zfs key -c mypool/data2". Exercise Z.6: Shadow Migration Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification to the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system.
    Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while maintaining access to it. Lab: Create the infrastructure for shadow migration and transfer one file system into another. First create the file system you want to migrate: root@solaris:~# zpool create oldstuff c3t4d0 root@solaris:~# zfs create oldstuff/forgotten Then populate it with some files: root@solaris:~# cd /var/adm root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten You need the shadow-migration package installed: root@solaris:~# pkg install shadow-migration Packages to install: 1 Create boot environment: No Create backup boot environment: No Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 14/14 0.2/0.2 PHASE ACTIONS Install Phase 39/39 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 You then enable the shadowd service: root@solaris:~# svcadm enable shadowd root@solaris:~# svcs shadowd STATE STIME FMRI online 7:16:09 svc:/system/filesystem/shadowd:default Set the filesystem to be migrated to read-only: root@solaris:~# zfs set readonly=on oldstuff/forgotten Create a new zfs file system with the shadow property set to the file system to be migrated: root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered Use the shadowstat(1M) command to see the progress of the migration: root@solaris:~# shadowstat EST BYTES BYTES ELAPSED DATASET XFRD LEFT ERRORS TIME mypool/remembered 92.5M - - 00:00:59 mypool/remembered 99.1M 302M - 00:01:09 mypool/remembered 109M 260M - 00:01:19 mypool/remembered 133M 304M - 00:01:29 mypool/remembered 149M 339M - 00:01:39 mypool/remembered 156M 86.4M - 00:01:49 mypool/remembered 156M 8E 29 (completed) Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration manual. That concludes this lab session.

    Read the article

  • View Word Definitions in IE 8 with the Define with Bing Accelerator

    - by Asian Angel
    Do you need an easy way to view word definitions while browsing with Internet Explorer? The Define with Bing Accelerator will display definitions in the same (or a new) tab and save you time while browsing. Using Define with Bing The installation consists of two steps. First, click on Add to Internet Explorer to start the process. Next you will be asked to confirm the installation. Once you have clicked Add, your new accelerator is ready to use (no browser restart required). Whenever you encounter a word that needs defining, highlight it, click on the small blue square, go to All Accelerators, and then Define with Bing. There are two ways to access the definition: hover your mouse over the Define with Bing text to open a small popup window, or click on Define with Bing to open a definition search in a new tab. Being able to access a definition or explanation in the same tab will definitely save you time while browsing. In the example shown here you can get an idea of what SCORM means, but clicking on the links inside the popup window is not recommended (the webpage opens in the popup and is not resizable). In that situation it is better to click on Define with Bing and see more information in a new tab. Conclusion The Define with Bing Accelerator can be a very useful time saver while browsing with Internet Explorer. Finding those word definitions will be a much more pleasant experience now. Add the Define with Bing Accelerator to Internet Explorer

    Read the article

  • How to set up server/domain name correctly in hosts file with HTTPS

    - by Byakugan
    I am trying to set up a local network and this is the kind of setup I am using: 1) A main server which connects to the internet with a static IP. 2) A second computer connected to the first one locally with an address like 192.168.0.2 - when I enter this address in the address line it is as if I typed localhost on the main server itself, so it shows my local web site, etc. This IP has a domain name and a router connected for it, for example www.domain.com, so I added lines like these to my main server's hosts file (Linux powered): 192.168.0.2 domain.com www.domain.com It was working OK: when I entered my domain name on the local computer it showed my site. But after some time I added an HTTPS certificate and added this line to my Apache server: Redirect permanent / https://www.domain.com/ And now it does not work, even when I add something like this to my hosts file: 192.168.0.2 https://www.domain.com So any idea how to make this work? Thank you.
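
    For reference, a hosts file maps names to addresses only, so a URL scheme such as https:// cannot appear in it; an entry can only take this shape:

        # /etc/hosts sketch - hostname-to-IP mappings, no scheme, no port
        192.168.0.2    domain.com www.domain.com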

    Read the article

  • How can a single script attached to multiple enemies provide separate behavior for each enemy?

    - by Syed
    I am making a TD game and am now stuck on multiple enemies using the same script. Everything works when the scripts are attached to one enemy only. The problem appears when running multiple enemies. Here is the overview. I have an Enemy object to which I have attached a 'RunEnemy' script. The flow of this script is: RunEnemy.cs: PathF p; p = gameObject.GetComponent<PathF>(); //PathF is a pathfinding algo whose 'search' function returns an array of path points from a starting position PathList = p.search(starting position); //------------------------------- if(PathList != null) { //if a way found! if(moving forward) transform.Translate(someXvalue,0,0); //translates on every frame until next grid point else if(moving back) transform.Translate(0,someXvalue,0); ...and so on.. if(reached on next point) PathList = p.search(from this point) //call again from next grid point so that if user placed a tower enemy will run again on the returned path } Now I have attached this script and "PathF.cs" to a single enemy, which works perfectly. I then made one more enemy object and attached both of these scripts to it as well, but it does not work: their movements overlap. I can't understand why; I have attached these scripts to two different game objects, but the values still change in both when either enemy changes its value. I don't want to go with a separate script for each enemy because there would be 30 enemies in a scene. How can I fix this problem?

    Read the article

  • Missing Items from Start Menu

    - by ChrisHDog
    I used to be able to search for and open items from my start menu, but recently items have gone "missing". Example: I used to be able to hit the start menu, type "iis" and the top item would be IIS Manager, which I could open and run... now I get a list of items that are not IIS. The same is true with typing "servi" - previously I would get Services (i.e. open local services); now it isn't showing. I've checked the properties on Customise Start Menu for "Search other files and libraries" and it is selected as "Search without public folders"... is there something else that is happening? It seems like something has changed, but I can't determine what changed or how to revert to what it was.

    Read the article

  • Python version priority in OSX/UNIX PATH environment variable

    - by mindthief
    Hi all, I want my system to use /usr/bin/python, but it's currently using /opt/local/bin/python, which points to /usr/bin/python2.6. I tried modifying the PATH variable in my .bashrc as PATH=~/bin:$PATH ...and then set a symbolic link in ~/bin to point to /usr/bin/python. i.e. ~/bin/python --> /usr/bin/python I figured this might prioritize this symlink over the /opt/local version if it came before the other one in the PATH variable, but when I opened a new shell I still found python pointing to /opt/local/bin. Any advice on a good way to get the system to use /usr/bin/python? Also, I usually use ipython as opposed to python directly. I'm assuming that if the system starts to use the correct version of python then ipython would also use that version? If not, how could I also get ipython to use the correct version? Thanks!
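
    A few hedged shell checks that usually expose which python is winning the PATH race; note that bash caches command locations, so a stale hash entry can hide a PATH change until it is reset:

        type -a python      # every python on the PATH, in priority order
        echo "$PATH"        # confirm ~/bin really comes first
        hash -r             # reset bash's cached command lookups
        export PATH="$HOME/bin:$PATH"   # quoted $HOME form of the same idea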

    Read the article

  • Boot-repair commands not found in PATH or not executable

    - by Bram Meerten
    I recently had problems with my Ubuntu partition (after the battery died). I managed to fix them by running Ubuntu from USB and running GParted. It worked: I can access my files on the partition when running Ubuntu from USB. But when I restart the computer, after selecting Ubuntu in GRUB, I get a black screen with a white underscore. I googled the problem and tried to solve it by setting nomodeset, but it didn't work. Next I tried to fix GRUB using boot-repair: I clicked on 'Recommended repair' and it told me to type the following commands in the terminal: sudo chroot "/mnt/boot-sav/sda5" apt-get install -fy sudo chroot "/mnt/boot-sav/sda5" dpkg --configure -a sudo chroot "/mnt/boot-sav/sda5" apt-get purge -y --force-yes grub-common But when running the second command, I get this error: dpkg: warning: 'sh' not found in PATH or not executable. dpkg: warning: 'rm' not found in PATH or not executable. dpkg: warning: 'tar' not found in PATH or not executable. dpkg: error: 3 expected programs not found in PATH or not executable. Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin. I didn't edit /etc/environment (or any other files); this is what it looks like: PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games" RUNNING_UNDER_GDM="yes" I have no idea how to fix this. I'm dual-booting Ubuntu 12.04 and Windows 7, and Windows boots fine.
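
    One hedged thing to try, given that dpkg complains core tools are "not found in PATH or not executable": hand the chroot an explicit PATH via env. That env exists at /usr/bin/env inside the chroot is an assumption to verify:

        sudo chroot /mnt/boot-sav/sda5 /usr/bin/env \
            PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
            dpkg --configure -a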

    Read the article

  • Running command transparently over ssh

    - by jnsg
    By transparently I mean forwarding of: stdin, stdout and stderr; standard signals (SIGHUP or SIGINT would be great for a start). As an example, consider these invocations of a (pointless) local and remote command: $ `cat - > /dev/null; sleep 10` < /local/file $ ssh user@host "cat - > /dev/null; sleep 10" < /local/file I can interrupt the first one with ^C just fine. But if I try this during the second one it only affects ssh, leaving the command running on the remote server if cat has already finished. I know about launching ssh with -t, but this way I can't send data via stdin. Is this possible with ssh alone at all?

    Read the article

  • Is there any way to make Mac OS X Spotlight only index the file names and not the contents?

    - by aalaap
    I do understand that the point of Spotlight is to look inside files, but it also returns file name matches, and that's what I need most of the time. Besides, Spotlight is running so absurdly slowly on my system (Snow Leopard on the iMac '08) that it's just unusable. I downloaded Canary and Spotlight wasn't able to find the app file for 15 minutes. It was already in the download stack, but as far as Spotlight was concerned, the file didn't exist. Hence, I would like to know of a way to make Spotlight only index the file names, which would perhaps make it a bit faster. I'm looking at mimicking the behaviour of Windows applications such as AvaFind or Search Everything. Edit: Let me highlight the fact that I am looking for an AvaFind or Search Everything replacement for Mac OS X. Go try one of these on a Windows machine and you'll understand my disappointment with Spotlight or any other popular search tools in OS X.
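
    From the command line, Spotlight's index can already be queried by file name alone, which is close to the AvaFind behaviour being asked for:

        # mdfind with -name matches on file names only, not contents
        mdfind -name "Canary"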

    Read the article
