Search Results

Search found 12601 results on 505 pages for 'z index'.


  • EMC/Legato/Networker failed to recover files: Cross Platform Recovery not supported.

    - by marc.riera
    Software used to back up: EMC Legato Networker. Legato server: Windows. Legato clients: same hardware (two years ago Fedora something, now Ubuntu). We are trying to recover from an old client which is no longer available.

    So this is the situation. On 07/20/2008 we backed up a Samba server (Fedora something) to a tape, setting 1 year as the browse policy and retention policy. That tape is now recyclable. We took down the DNS name and deleted the Legato client configuration. That Legato client was reinstalled and is doing other stuff on Ubuntu 10.04, with a different name but the same IP.

    Now, two years and some months later, we need to recover a folder from the 2008 backup of the fedora-samba-server. First, Legato does not show the client name because the config was deleted, so we create it again. We set the old DNS name back on track, pointing to the same IP where the old server was (same MAC address). We created a new 'old client configuration' pointing to the new server (different Legato client ID, I suppose).

    The ssid containing the needed folder is on two tapes, 20 and 22. The index for that backup is on tape 21. We put these tapes in the jukebox (IBM T4000) -- not important for the issue. All three tapes have passed their browsable and recoverable times, so they are recyclable.

    We get the clone id from the ssid with the following command:

        mminfo -avot -q "ssid=<ssid>" -r cloneid

    We set the tapes to not recyclable:

        nsrmm -S <ssid>/<cloneid> -o notrecyclable

    We change the retention for the tapes to a future date:

        nsrmm -S <ssid> -e 01/20/2011

    We check that the dates are correct:

        mminfo -avV -q "ssid=<ssid>" -r ssbrowse(26),ssretent(26),savetime

    So far it's OK. We close the terminal and restart the server, just to be sure. Finally, we recover the index for the ssid where the folder should be:

        nsrck -L7 -t "07/20/2008" oldservername.domain.org

    Then we open Networker User, select the server, select the old client as source and the new client as destination. And this is what I get (screenshot of the output): http://i.imgur.com/1nOr8.png

    Should I understand that I need to install whatever operating system was running on the old "linux server"/"networker client" to be able to restore 26 MB of files? Thanks
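
    For reference, a minimal shell sketch of the un-recycle/re-index sequence above, looping over the save set IDs involved; the ssid values are placeholders, not values from the post:

        #!/bin/sh
        # Placeholder ssids for the save sets on tapes 20, 21 and 22;
        # substitute the real values reported by mminfo.
        for SSID in 1111111111 2222222222 3333333333; do
            CLONEID=$(mminfo -avot -q "ssid=${SSID}" -r cloneid)   # look up the clone id
            nsrmm -S "${SSID}/${CLONEID}" -o notrecyclable         # protect from recycling
            nsrmm -S "${SSID}" -e 01/20/2011                       # push retention forward
        done
        # Rebuild the client file index for the old client as of the backup date.
        nsrck -L7 -t "07/20/2008" oldservername.domain.org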

  • How to retrieve virtual machines from a pool via API in oVirt (RHEV)

    - by FerCa
    In oVirt (Red Hat Enterprise Virtualization) you can create a virtual machine pool to allow your users to retrieve virtual machines from this pool. I found how a user, in the RHEV User Portal, can request a virtual machine from the pool; this is explained here: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Virtualization/3.0/html/Evaluation_Guide/Evaluation_Guide-Allocate_VM.html The thing is that I will need to retrieve virtual machines from the pool with the REST API and, after reading the documentation (https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Virtualization/3.0/html-single/REST_API_Guide/index.html), I can't find a way to do this.
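
    For illustration only, a hedged sketch of what such a call might look like; the allocatevm action on a pool is an assumption based on later oVirt REST API documentation, and the host, credentials and pool id are placeholders:

        # Ask the engine to allocate one VM from a pool to the calling user.
        curl -k -u 'user@domain:password' \
             -H 'Content-Type: application/xml' \
             -d '<action/>' \
             'https://rhevm.example.com/api/vmpools/<pool_id>/allocatevm'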

  • Backup solutions for Rackspace Cloud Sites

    - by Daniel A. White
    What options do I have for backing up the content of a Rackspace Cloud Site, including files and databases? I know they have cron jobs, but I am not sure what options I have when it comes to that. Here is what they document about the cron jobs they support: http://cloudsites.rackspacecloud.com/index.php/How%5Fdo%5FI%5Fschedule%5Fa%5Fcron%5Fjob%3F
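
    As a sketch of the kind of cron-driven backup this allows -- the paths, hostname and credentials below are hypothetical, not Rackspace specifics:

        # Nightly: dump one database, then archive the web content directory.
        # (% must be escaped as \% inside a crontab entry.)
        0 3 * * * mysqldump -h db.example.com -u backup_user -pSECRET mydb > /home/user/backups/mydb.sql
        30 3 * * * tar -czf /home/user/backups/site-$(date +\%F).tar.gz /home/user/web/content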

  • Exclude filetypes in a TextMate project

    - by cwd
    I know that in TextMate I can go to Preferences - Advanced - Folder References and play with the regex pattern to remove certain types of files and folders by default. I heard that if you have an existing project, however, and change these, the project is not affected. From a similar but different angle, I am interested to know if I can exclude certain types of files, say anything named "index.html", from an existing project while not changing the global scope. Thanks!
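
    For illustration only, the kind of change involved is adding an alternative such as index\.html to the file pattern's exclusion regex; this is a simplified sketch, not TextMate's actual default pattern:

        !(/\.[^/]*|index\.html)$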

  • How can I get JSON from Nginx Autoindex?

    - by Devrim
    How can I modify Nginx's autoindex so that it will generate a JSON version of the index instead of HTML? Or is there a module that already does that? I want to have this: http://u.kodingen.com/ClrG instead of this: http://u.kodingen.com/Cls2N This blog writer seems to have done it: http://u.kodingen.com/Clz3F http://lamsonproject.net/blog/2009-08-03.html But he didn't mention how.
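
    As an aside not in the original question: later nginx releases (1.7.9 and newer) gained a built-in directive for exactly this. A minimal sketch, with a hypothetical location:

        location /files/ {
            autoindex on;
            autoindex_format json;   # JSON directory listing, nginx >= 1.7.9
        }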

  • What is a "PR"? What does PR stand for in the context of FreeBSD Ports

    - by Jared Updike
    Compare: http://www.freebsd.org/prstats/index.html A specific "PR": http://www.freebsd.org/cgi/query-pr.cgi?pr=134774 Does it stand for Portability Report, or something similar? I can tell it has to do with tracking bug reports and build problems for specific ports, but the acronym is baffling me. It may also be used in the Linux world, but Googling for "Linux PR" only yields results related to public relations. Apparently FreeBSD has PRs and Linux has public relations.

  • Exclude minify from CSF/LFD

    - by Patrick Lanfranco
    I have currently installed minify on one of my websites; however, I am getting hammered with emails from CSF/LFD. Example:

        Time:   Fri Aug 10 13:10:03 2012 +0700
        File:   /tmp/minify_builder,index.php_f516d1c7cae9c3881406fd9a0ce69c38
        Reason: Script, file extension
        Owner:  -:- (504:501)
        Action: No action taken

    What is the best way to have these ignored inside CSF? Some advice would be highly appreciated. Thank you very much.
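
    One hedged possibility, assuming csf's standard ignore file applies to these suspicious-file alerts: add a pattern matching minify's temp files to /etc/csf/csf.fignore and restart lfd. The pattern below is illustrative:

        # /etc/csf/csf.fignore -- one path or regex per line.
        /tmp/minify_builder,index\.php_.*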

  • Software to aid writing Research Papers

    - by Rogue
    I have been working on this white paper for weeks and have been having a very rough time with all the links and reference material I have found online. I finally got them organized, but this is a very manual procedure. Is there any software that:

    1) organizes, sorts and bookmarks your links and reference pages and gives you one-click access to them;
    2) auto-generates the bibliography, based on what you already linked;
    3) gives you templates of research paper layouts;
    4) has templates and examples of indexes?

    P.S.: I need software for Windows XP, and it can be paid software.

  • Installed LAMP, but PHP does not seem to work

    - by Dave
    Hello, I did the yum install for Apache, MySQL, PHP and phpMyAdmin. localhost/phpmyadmin displays correctly, so I would assume PHP functions correctly. I put my webpage at /var/www/html/test_site/index.php. This page contains a phpinfo() call, but it does not display the output of an echo "test"; What could be wrong? Thanks, Dave
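
    A minimal test page along the lines described -- the path matches the post, the exact contents are assumed:

        <?php
        // /var/www/html/test_site/index.php
        // If PHP is being parsed, both lines below render;
        // if the file is served raw, the browser shows this source instead.
        echo "test";
        phpinfo();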

  • Getting ajaxterm to work on Debian Lenny

    - by Kevin Duke
    I've been knocking my head against this one for a while and have read many tutorials, but I just can't get it to work. Ajaxterm is a web-based SSH client; once installed with apt-get install ajaxterm and then enabled with /etc/init.d/ajaxterm start, I should be able to access the SSH terminal at http://mywebsite:8022/ But doing so only gives me a "Page not found". Any suggestions? My actual VPS is: http://173.244.205.160 My sources: https://secure.kitserve.org.uk/content/setting-ajaxterm-Ubuntu-and-Debian-powerpc http://wiki.kartbuilding.net/index.php/ajaxterm
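
    A hedged diagnostic, not from the post: Ajaxterm packages commonly bind to 127.0.0.1 only, which would produce exactly this symptom for remote browsers. Checking the listening address would confirm or rule that out:

        # Show which address the Ajaxterm port is bound to.
        netstat -tlnp | grep 8022
        # 127.0.0.1:8022 means local-only (front it with a proxy or rebind);
        # 0.0.0.0:8022 means it should be reachable remotely.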

  • Granting read-write rights to my web application on a VPS

    - by davykiash
    I am currently testing bulk CSV import functionality in a web application, and I came across the error: The given destination is not writeable My application is Zend based and uses the MVC structure:

        application
            uploads
        library
            Zend
        public
            index.php

    What Ubuntu command do I execute to safely grant the necessary rights to the uploads folder in my web application?
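
    A hedged sketch of the usual approach, assuming Apache runs as www-data and the application lives under /var/www/app (both assumptions):

        # Give the web server ownership of the uploads folder only,
        # and allow group writes; avoid opening up the whole application.
        sudo chown -R www-data:www-data /var/www/app/application/uploads
        sudo chmod -R 775 /var/www/app/application/uploads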

  • Using photos from network drive with Windows 8 photo app

    - by Paul
    I have photos on my network drive that I want to display (in live tiles, preferably in the Photos app). Under c:\users\paul\pictures I have made a link to them using mklink /d. This works fine in the classic desktop, but nothing appears in the Photos app. I am guessing that this is an issue with indexing: the photos won't appear until they are indexed, and Windows won't normally index a network drive (unless you make it "available offline", which just copies the files locally) -- but this is exactly what the mklink was supposed to work around, and the properties show it is indexable. Any ideas?
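
    For reference, a sketch of the kind of link described; the link name and share target are hypothetical:

        rem Directory symbolic link from the local Pictures folder to a network share.
        mklink /d C:\Users\paul\Pictures\NetworkPhotos \\nas\photos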

  • Default permissions for newly-created files/folders using ACLs not respected by commands like "unzip"

    - by Ngoc Pham
    I am having trouble setting up a system for multiple users accessing the same set of files. I've read tutorials and docs and played with ACLs, but haven't succeeded yet.

    MY SCENARIO: I have multiple users, for example user1 and user2, which belong to a group called sharedusers. They must all have WRITE permission to the same set of files and directories, say under /userdata/sharing/. I have the folder's group set to sharedusers and the SGID bit set, so all newly created files/dirs inside get the same group:

        ubuntu@home:/userdata$ ll
        drwxr-sr-x 2 ubuntu sharedusers 4096 Nov 24 03:51 sharing/

    I set ACLs for this directory so sub dirs/files inherit permissions from the parent:

        ubuntu@home:/userdata$ setfacl -m group:sharedusers:rwx sharing/
        ubuntu@home:/userdata$ setfacl -d -m group:sharedusers:rwx sharing/

    Here's what I've got:

        ubuntu@home:/userdata$ getfacl sharing/
        # file: sharing/
        # owner: ubuntu
        # group: sharedusers
        # flags: -s-
        user::rwx
        group::r-x
        group:sharedusers:rwx
        mask::rwx
        other::r-x
        default:user::rwx
        default:group::r-x
        default:group:sharedusers:rwx
        default:mask::rwx
        default:other::r-x

    This seems okay: when I create a new folder with new files inside, the permissions are correct.

        ubuntu@home:/userdata/sharing$ mkdir a && cd a
        ubuntu@home:/userdata/sharing/a$ touch a_test
        ubuntu@home:/userdata/sharing/a$ getfacl a_test
        # file: a_test
        # owner: ubuntu
        # group: sharedusers
        user::rw-
        group::r-x              #effective:r--
        group:sharedusers:rwx   #effective:rw-
        mask::rw-
        other::r--

    As you can see, the sharedusers group has effective permission rw-. HOWEVER, if I have a zip file and use the unzip -q command to unzip it inside the sharing folder, the extracted folders don't have group write permission, so users from the sharedusers group cannot modify files under those extracted folders.

        ubuntu@home:/userdata/sharing$ unzip -q Joomla_3.0.2-Stable-Full_Package.zip
        ubuntu@home:/userdata/sharing$ ll
        drwxrwsr-x+  2 ubuntu sharedusers 4096 Nov 24 04:00 a/
        drwxr-xr-x+ 10 ubuntu sharedusers 4096 Nov  7 01:52 administrator/
        drwxr-xr-x+ 13 ubuntu sharedusers 4096 Nov  7 01:52 components/

    You can spot the difference in permissions between folder a (created before) and folder administrator (extracted by unzip). And the ACLs of a file inside administrator:

        ubuntu@home:/userdata/sharing$ getfacl administrator/index.php
        # file: administrator/index.php
        # owner: ubuntu
        # group: ubuntu
        user::rw-
        group::r-x              #effective:r--
        group:sharedusers:rwx   #effective:r--
        mask::r--
        other::r--

    It also has the ubuntu group, not the sharedusers group as expected. Could someone please explain the problem and give me advice? Thank you in advance!
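
    A hedged aside, an assumption about the mechanics rather than something from the post: unzip restores each entry's recorded mode with an explicit chmod, and an explicit chmod recalculates the ACL mask, which is what knocks the effective sharedusers permission down to r--. A common post-extraction repair looks like this (directory name from the post, commands illustrative):

        # Restore the group, re-open group write and rebuild the ACL mask.
        chgrp -R sharedusers administrator/
        chmod -R g+w administrator/
        setfacl -R -m m::rwx administrator/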

  • Why is the output of ls like this?

    - by dorelal
    I am using Snow Leopard, and this is what I get in my terminal (by default I am using bash):

        > ls c*
        clock:
        PSD  demo.html  jquery.tzineClock  script.js  styles.css

        clock2:

        clojure-presentations:
        Clojure-1up.pdf             ClojureInTheField-1up.pdf  license.html
        Clojure-4up.pdf             README
        ClojureForRubyists-1up.pdf  keynote

        coffee-script:
        Cakefile  README    bin            examples  index.html  package.json  test
        LICENSE   Rakefile  documentation  extras    lib         src           vendor

  • Add a trailing slash with mod_rewrite

    - by Conner Stephen McCabe
    Just wondering how I add a trailing slash at the end of my URLs using mod_rewrite? This is my .htaccess file currently:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^([^/]*)$ index.php?pageName=$1

    My URLs show like so: wwww.**.com/pageName I want them to show like so: wwww.**.com/pageName/ The URL is holding a GET request internally, but I want it to look like a genuine directory.
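
    A hedged sketch of one common arrangement, untested against this exact setup: first redirect slashless requests to the slashed form, then rewrite the slashed form internally:

        RewriteEngine On
        # Externally redirect /pageName to /pageName/ (skip real files).
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^([^/]+)$ /$1/ [R=301,L]
        # Internally map /pageName/ onto index.php.
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^([^/]+)/$ index.php?pageName=$1 [L]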

  • Zend Server Cannot restart PHP: permission denied for user

    - by user30115
    When I click "Restart PHP" in the Zend Server web interface, I get this error in the logs:

        PHP Warning: Cannot restart PHP: permission denied for user IIS APPPOOL\DefaultAppPool. in C:\Program Files (x86)\Zend\ZendServer\GUI\application\CE\models\ZwasComponents\Util\Api\UserServer.php on line 86

    Based on http://kb.zend.com/index.php?View=entry&EntryID=426 I tried to give permissions to the user IIS APPPOOL\DefaultAppPool on the folder C:\Program Files (x86)\Zend\ZendServer\, but it still gives the same error. Do you know which resources the application pool does not have permissions to?
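
    For reference, a hedged sketch of granting that identity modify rights from an elevated command prompt; the path is the install folder from the post, and whether modify rights suffice is an assumption:

        icacls "C:\Program Files (x86)\Zend\ZendServer" /grant "IIS APPPOOL\DefaultAppPool":(OI)(CI)M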

  • Combining DocumentRoot and ProxyPass in an Apache web server

    - by user10211
    I have an application running on Tomcat and fronted with Apache. My server name is www.abc.com, so in my vhost settings I have:

        DocumentRoot /home/user/www.abc.com
        ServerName www.abc.com
        ProxyPass /app http://localhost:8080/app
        ProxyPassReverse /app http://localhost:8080/app

    The DocumentRoot has a static file, index.html, which I would like to serve when www.abc.com is requested; all other requests should be directed to Tomcat via the ProxyPass. What is the easiest way to achieve this? Thanks
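
    A hedged sketch of one way to express that; the exclusion syntax is standard mod_proxy, but this exact layout is an assumption:

        # Keep the landing page local...
        ProxyPass /index.html !
        # ...and send everything else to Tomcat. A bare request for "/" may still
        # need mapping onto the local page, e.g. RedirectMatch ^/$ /index.html
        ProxyPass / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/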

  • How do I install an intermediate certificate?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon load balancer. However, when I check the SSL with a site test tool (http://www.networking4all.com/en/support/tools/site+check/), I get the following error:

        Error while checking the SSL Certificate!!
        Unable to get the local issuer of the certificate. The issuer of a locally
        looked up certificate could not be found. Normally this indicates that not
        all intermediate certificates are installed on the server.

    I converted the crt file to PEM using these commands from a tutorial:

        openssl x509 -in input.crt -out input.der -outform DER
        openssl x509 -in input.der -inform DER -out output.pem -outform PEM

    During setup of the Amazon load balancer, the only option I left out was the certificate chain (PEM encoded), but this was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain? For the last question I have tried googling, but I'm getting more confused than before. Please help. Many thanks in advance.

    UPDATE: @all, thanks for the helpful advice. If you make a request to VeriSign they will give you a certificate chain; however, this chain includes the public crt, the intermediate crt and the root crt. Make sure to remove the public crt from your certificate chain (it is the topmost certificate) before adding the chain to the certificate chain box of your Amazon load balancer. If you are making HTTPS requests from an Android app, the above instructions may not work for older Android OS versions such as 2.1 and 2.2. To make it work on older Android OS, see https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR657&actp=LIST&viewlocale=en_US -- on this link click on the "retail ssl" tab and then on "secure site" "CA Bundle for Apache Server", and copy and paste those intermediate certs into the certificate chain box. Just in case you have not found it, here is the direct link: https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR1409 If you are using GeoTrust certificates, the solution is much the same for Android devices; you just need to copy and paste their intermediate certs for Android. PS: sorry for the long URLs, but "new users can only post a maximum of two hyperlinks".
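
    As a hedged illustration of building the chain itself (file names are placeholders): a PEM chain is just the issuer certificates concatenated, the leaf's issuer first:

        # Intermediate first, then the root; paste the result into the
        # load balancer's certificate chain box.
        cat intermediate.pem root.pem > chain.pem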

  • Adding a FreeBSD user upon first login

    - by Halik
    Is it possible to achieve the proposed behavior on my FreeBSD 8.2 server? A new user SSHes into my server. As 'Login:' he supplies his student index number, and a new, locked account is created, with a random password sent to his [email protected] mail as the authentication method. After he logs in with this password, the account is fully created and activated/unlocked, and the user is asked/forced to change the password to a new one.

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will.

    I have tried several things:

    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run DiskWarrior on the copy, repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value -- wide open 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer the data to a newly formatted drive folder by folder, resetting the Spotlight index in between adding each to observe for issues (interesting here is that no issues occurred except with the Documents folder; when I transferred only the Documents folder to a newly formatted drive on its own, there was no trouble -- it appears almost as though it may not be the content but the quantity or specific combination of data that results in problems)
    - use Data Rescue to transfer the data to yet another newly formatted drive to expose any missed hidden files

    Between each of the above steps I stopped Spotlight (searching for anything beginning with md in Activity Monitor - All Processes and quitting it), deleted the .Spotlight-V100 directory from the affected drive, and restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it.

    In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject it without Force Eject. Once I disconnect the drive after the 4-hours-remaining situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again.

    So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option, because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas? Thanks, M
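
    For reference, a hedged sketch of the command-line equivalents of the reset steps described; mdutil ships with these OS versions, but the volume name is a placeholder:

        # Turn indexing off, remove the old index, erase and re-enable.
        sudo mdutil -i off /Volumes/MusicArchive
        sudo rm -rf /Volumes/MusicArchive/.Spotlight-V100
        sudo mdutil -E /Volumes/MusicArchive
        sudo mdutil -i on /Volumes/MusicArchive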

  • Open-source knowledge base CMS system

    - by Thomi
    I'm looking for an open-source knowledge base system that uses tags, rather than free-text search, to identify articles (a lot like Server Fault does). I've looked at TWiki, which many people suggested, but haven't found what I'm looking for. Basically I want to be able to create and tag articles, and provide an easy way for anonymous users to search based on tags.

    Edit: OK, here's some more detail regarding what I want. Basically, all the knowledge base systems I have seen so far are a collection of articles, each article with a title. Most of them allow you to categorise articles into groups and sub-groups. Users of the system can search for information using a title search, for example "How do I print from AwesomeProduct?", which then shows a list of any articles that match that search text. This is fine and dandy when your KB is for one version of the software product (the mythical AwesomeProduct ver 1.0). However, the development team then go ahead and create a new version (ver 2.0) that adds many new features and changes some existing features. Now, how do we support both products in the same KB?

    The naive method is to copy all articles from 1.0 and update them for 2.0, adding and removing articles in 2.0 as required. We can then add text at the top of every 1.0 article that says: "this article applies to 1.0 only; to see the 2.0 version, click here" (or something similar). The problem with articles being indexed in the system by title is that it's very hard to filter based on metadata like version. What happens when we create version 3.0 or 4.0? The end situation here is that you have a mess of articles: hard to search, hard to filter, and even harder to manage.

    The solution (it seems to me) is to use tags, rather than text, as the article index mechanism. So articles can be tagged with a tag representing the software version, topic area etc. Users can then filter based on tag; an example search might be "version_1 printing", which straight away gives a list of articles with all these tags. So that's what I'm looking for: a KB system that uses tags, rather than text, to index many articles. I'm sure I could build something with Drupal, but I was hoping for something that worked out of the box.

  • Is it dangerous for a processor core to be *always* loaded at 100%?

    - by javapowered
    In my HFT software I plan to use one core for stock index calculation. That would be simply a while(true) loop without any delays, which will calculate (sum and multiply) components as often as possible (so millions of times per second), and I plan to do that 8 hours per day, every day. I have never before loaded my computer to 100% full time, every day, regularly. Might it be dangerous? Does the processor have some kind of "resource" (very big, of course) after which it stops working?
