Search Results

Search found 19667 results on 787 pages for 'missing template'.


  • why my server has a dir named "?"

    - by liuxingruo
    These are all the dirs on my server:

        ?  bin  boot  dev  etc  home  lib  lost+found  media  media2  misc  mnt
        net  opt  proc  root  sbin  selinux  srv  sys  tmp  usr  var

    Why is there a "?" dir? Thanks very much. BTW: the touch command wasn't found on my server (weird). I list the bin dir:

        alsacard   cp                    dd             env      hostname  loadkeys         more        ps               sed      tcptraceroute
        alsaunmute cpio                  df             ex       igawk     loadkeys.static  mount       pwd              setfont  traceroute6
        arch       csh                   dmesg          false    ipcalc    logger           mountpoint  raw              setserial tracert
        awk        cut                   dnsdomainname  fgrep    kbd_mode  login            mv          red              sh       view
        basename   date                  doexec         gawk     keyctl    ls               netstat     redhat_lsb_init  sleep    ypdomainname
        bash       dbus-cleanup-sockets  domainname     gettext  kill      mail             nice        rm               sort
        cat        dbus-daemon           dumpkeys       grep     ksh       mailx            nisdomainname rmdir          stty
        chgrp      dbus-monitor          echo           gtar     ksh93     mkdir            pgawk       rpm              su
        chmod      dbus-send             ed             gunzip   link      mknod            ping        rvi              sync
        chown      dbus-uuidgen          egrep          gzip     ln        mktemp           ping6       rview            tar

    touch is missing — how can I get it back?
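
    A hedged sketch for tracking down the oddly named entry (assuming it lives in / and the shell is bash): names containing non-printing characters are often displayed as "?", so listing with escaped names and inode numbers usually reveals what the directory really is, and lets you remove it by inode if it turns out to be junk.

        # show escaped names and inode numbers for every top-level entry
        ls -lbdi /*
        # if it is junk, delete it by inode (12345 is a placeholder for
        # the inode number ls reported)
        find / -maxdepth 1 -inum 12345 -exec rm -ri {} +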

  • Linking to lua libraries w/ codeblocks on linux

    - by person
    After I downloaded the source for Lua, I followed the install instructions, doing:

        make linux install
        make generic install

    I've also done the make test and it passes, printing out "Hello World, from Lua 5.1". However, I can't link to the Lua libraries in CodeBlocks. I know where lualib.a is (/usr/local/lib), which I set in my Search Directories for the linker. I still get error messages like:

        undefined reference to `lua_isstring'

    Am I missing something critical here? P.S. I had this running on Windows via Visual Studio.
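
    A hedged guess rather than a confirmed fix: with GNU ld the library must come after the objects that reference it, and a stock Lua 5.1 install names the archive liblua.a (linked as -llua), needing -lm and -ldl on Linux as well. A minimal command-line sketch of a working link:

        # library flags go after the source files; paths assume a default
        # "make install" under /usr/local
        gcc main.c -o main -I/usr/local/include -L/usr/local/lib -llua -lm -ldl

    In CodeBlocks the equivalent is adding lua, m, and dl under Linker Settings, not just the search directory.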

  • Shell script with ImageMagick: hangs forever?

    - by AP257
    I've generated a shell script that uses ImageMagick to convert and crop around 18000 images. Here's a sample entry (so there are 18000 of these):

        if [ ! -f ./cropped/16333-1.png ]
        then
            convert -crop 724x118+876+1989 ./lin/34.png ./cropped/16333-1.png
            echo cropping 16333-1
        fi
        if [ ! -f ./cropped/16333-1_thumb.png ]
        then
            convert -define jpeg:size=400x100 ./cropped/16333-1.png -thumbnail '400x100>' -background transparent -gravity center -extent 400x100 ./cropped/16333-1_thumb.png
            echo thumbing 16333-1
        fi

    The script only runs for about 2000 images before hanging forever. Am I missing something, or leaking memory somewhere? Thanks for your help!
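
    Hard to diagnose a hang from a sample alone, but two hedged containment measures may help narrow it down: cap ImageMagick's resource usage so convert cannot exhaust memory, and bound each call with a timeout so one stuck image cannot stall the whole batch (assuming GNU coreutils' timeout is available).

        # at the top of the script: cap ImageMagick resources for every convert
        export MAGICK_MEMORY_LIMIT=256MB
        export MAGICK_MAP_LIMIT=512MB
        # per call: kill any convert that runs longer than 60 seconds
        timeout 60 convert -crop 724x118+876+1989 ./lin/34.png ./cropped/16333-1.png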

  • Subversion multi checkout post-commit hook?

    - by FLX
    The title must sound strange but I'm trying to achieve the following. SVN repo location: /home/flx/svn/flxdev. SVN repo "flxdev" structure:

        + Project1
        ++ files
        + Project2
        + Project3
        + Project4

    I'm trying to set up a post-commit hook that automatically checks out on the other end when I do a commit. The post-commit doc explicitly lists the following:

        # POST-COMMIT HOOK
        #
        # The post-commit hook is invoked after a commit. Subversion runs
        # this hook by invoking a program (script, executable, binary, etc.)
        # named 'post-commit' (for which this file is a template) with the
        # following ordered arguments:
        #
        #   [1] REPOS-PATH   (the path to this repository)
        #   [2] REV          (the number of the revision just committed)

    So I made the following command to test:

        REPOS="$1"
        REV="$2"
        echo "Updated project $REPOS to $REV"

    However, when I edit files in Project1 for example, this outputs "Updated project /home/flx/svn/flxdev to 1016". I'd like this to be: "Updated project Project1 to 1016". Having this variable allows me to specify different actions per project post-commit. How can I get the project parameter? Thanks! Dennis
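
    One hedged way to recover the project name inside the hook is to ask svnlook which top-level directory the revision touched; a sketch, assuming each commit touches a single project:

        REPOS="$1"
        REV="$2"
        # first path component of the first changed directory, e.g. "Project1"
        PROJECT=$(svnlook dirs-changed "$REPOS" -r "$REV" | head -n 1 | cut -d/ -f1)
        echo "Updated project $PROJECT to $REV"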

  • Any way to know what files were in a broken ZFS pool?

    - by Erik Tjernlund
    I have a large ZFS pool of 4 combined drives. Now the filesystem cannot be mounted:

          pool: tank
         state: UNAVAIL
        status: One or more devices could not be opened. There are insufficient
                replicas for the pool to continue functioning.
        action: Attach the missing device and online it using 'zpool online'.
           see: http://www.sun.com/msg/ZFS-8000-3C
          scan: none requested
        config:

            NAME        STATE     READ WRITE CKSUM
            tank        UNAVAIL      0     0     0  insufficient replicas
              c10t0d0   ONLINE       0     0     0
              c8t0d0    UNAVAIL      0     0     0  cannot open
              c8t1d0    ONLINE       0     0     0
              c10t1d0   ONLINE       0     0     0

    Probably a broken drive (c8t0d0). I'm not overly concerned by the loss of the data, but I'd love to know exactly which files were in that pool. Is there any way to get a listing of what files were there?

  • What's the fastest automatic way to transfer 2GB of data between 2 PCs every night?

    - by phan
    While it's fast (less than 2 minutes), I hate having to copy files from PC #1 onto a USB stick and then manually popping it into PC #2 to copy the files over. Dropbox is too slow in uploading and then downloading (syncing) 2GB; it could take hours. Copying 2GB over the network is also slow because we're dealing with 10,000 little files that total 2GB, not just one giant 2GB file. Not sure why, but dealing with 10,000 little files makes the copy process much longer. Is there any other method that I'm missing? Any ideas? I'm using Win7 on both PCs. Edit: These files change every single night.
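
    One candidate worth weighing: an incremental mirror copies only the files that changed since last night instead of all 10,000. A hedged sketch using robocopy (built into Win7), runnable from Task Scheduler; the share name and paths are assumptions:

        robocopy C:\data \\PC2\data /MIR /MT:16 /R:1 /W:1 /LOG:C:\robocopy.log

    /MIR mirrors the tree (including deletions), and /MT copies multiple small files in parallel, which is where single-threaded copies lose most of their time.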

  • Why can't I create an Alias Resource Record Set for an EC2 instance

    - by praterade
    I have been working with AWS for over a year, setting up EC2 instances, domains, ELBs, etc. When I want to assign a subdomain to an EC2 instance, I have to create an elastic IP (that I pay for), then assign a CNAME record to that elastic IP. When I want to assign a subdomain to an ELB (load balancer) instance, I just create an alias resource record set to the ELB. I've read over the docs and don't understand why AWS doesn't support aliasing to instances. Am I missing a key concept here? Wouldn't it be simpler to just alias EC2 instances and skip the whole elastic IP bit?

  • cannot connect to MS FTP 7.5 on Windows 2008 on Amazon EC2 instance

    - by minerj
    I have just installed the MS FTP 7.5 upgrade on my Windows 2008 Server (Service Pack 2) running on an Amazon EC2 instance. In the FTP Firewall Support settings for the server in IIS Manager I have set up the passive port range 45001 - 45005 and also set the External Firewall IP address to match the assigned Amazon Elastic IP address. Using the AWS Console I changed the Security Group for the server to allow access to the server through ports 21 and 45001 through 45005. Using an FTP client (either the command line FTP client or Windows Explorer) on the Amazon server I can connect to the FTP server but I cannot connect with an external FTP client. When I checked to see which ports were open on the server using Shields Up it shows that port 21 is open but ports 45001 to 45005 are closed. I assume I'm missing something. Any help greatly appreciated.
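
    A hedged thing to check beyond the EC2 Security Group: the Windows Firewall inside the instance also has to allow the passive range, and the FTP service needs a restart after the IIS firewall settings change. A sketch from an elevated command prompt:

        netsh advfirewall firewall add rule name="FTP passive range" dir=in action=allow protocol=TCP localport=45001-45005
        net stop ftpsvc && net start ftpsvc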

  • IBM storage DS3400: can't connect to management using fiber

    - by Eli B
    I have a problem with a DS3400 IBM storage system we bought a few years back. When I try to manage the storage using its IBM storage management software, I can't find it using automatic detection, even though it's connected directly using the fiber and I can see the logical drives connected and working properly. When I tried to connect the two management Ethernet wires and manage the storage directly by entering the IP address manually, I was able to connect; however, after I make several changes to the controller configuration, one of the controllers stops responding and I am not able to ping it directly (since you can't make any changes without being connected to both controllers, this is a problem). What's more bizarre is that when I change the IP of the controller that doesn't respond, it starts working. I have found some articles over the web explaining stuff about LUN31 being missing and causing similar problems; however, all my attempts to manually configure it failed. Link to an example: http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5075711. In short, I'm trying to get my storage to appear in the storage manager when directly connected using only the fiber cable. Thanks in advance.

  • Hiera + Puppet classes

    - by Amadan
    I'm trying to figure out Puppet (3.0) and how it relates to built-in Hiera. So this is what I tried, an extremely simple example (I'll make a more complex hierarchy when I manage to get the simple one working):

        # /etc/puppet/hiera.yaml
        :backends:
          - yaml
        :hierarchy:
          - common
        :yaml:
          :datadir: /etc/puppet/hieradata

        # /etc/puppet/hieradata/common.yaml
        test::param: value

        # /etc/puppet/modules/test/manifests/init.pp
        class test ($param) {
          notice($param)
        }

        # /etc/puppet/manifests/site.pp
        include test

    If I directly apply it, it's fine:

        $ puppet apply /etc/puppet/manifests/site.pp
        Scope(Class[Test]): value

    If I go through puppet master, it's not fine:

        $ puppet agent --test
        Could not retrieve catalog from remote server: Error 400 on SERVER: Must pass param to Class[Test] at /etc/puppet/manifests/site.pp:1 on node <nodename>

    What am I missing? EDIT: I just left the office, but a thought struck me: I should probably restart puppet master so it can see the new hiera.yaml. I'll try that on Monday; in the meantime, if anyone figures out that it's something else, I'd appreciate it :)

  • Excel Graph: How can I turn the data below into a 'time-based' graph

    - by Mike
    In my spreadsheet I am collecting the time periods when certain values were changed. The user is restricted to 4 time periods. I would like to show the data based on those time periods. I've included a mock-up of the data and the type of graph I would like to create: http://i48.tinypic.com/55lezr.jpg. I've been trying to create it for the last hour but am obviously missing something, so thought I'd ask around. Many thanks for any help. Mike. P.S. How do I make this image appear in the message and not as a link?

  • How do I get write access to ubuntu files from Windows?

    - by Steven
    I'm running Ubuntu 11.10 on my virtual machine as a web server. I've mounted the W:/ drive in Win 7 to my /www folder in Ubuntu. I can read the files, but I'm not able to write to them. In Samba, I have created the following user:

        <www-data> = "<www-data>"

    and given guest ok for the www folder:

        [www]
        comment = Ubuntu WWW area
        path = /var/www
        browsable = yes
        guest ok = yes
        read only = no
        create mask = 0755
        ;directory mask = 0775
        force user = www-data
        force group = www-data

    I've also run sudo chmod -R 755 www to ensure correct rw access. What am I missing in order to get write access to my Ubuntu files from Windows?
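
    A hedged observation rather than a confirmed fix: chmod 755 grants write access to the owner only, and with force user = www-data every access arrives as the www-data account, so the files themselves must be owned by (or group-writable for) www-data. A sketch:

        # assumption: the web tree should belong to www-data outright
        sudo chown -R www-data:www-data /var/www
        sudo chmod -R 775 /var/www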

  • Apache + LDAP Auth: access to / failed, reason: require directives present and no Authoritative handler

    - by Karolis T.
    Can't solve this one. Here's my .htaccess:

        AuthPAM_Enabled Off
        AuthType Basic
        AuthBasicProvider ldap
        AuthzLDAPAuthoritative on
        AuthName "MESSAGE"
        Require ldap-group cn=CHANGED, cn=CHANGED
        AuthLDAPURL "ldap://localhost/dc=CHANGED,dc=CHANGED?uid?sub?(objectClass=posixAccount)"
        AuthLDAPBindDN CHANGED
        AuthLDAPBindPassword CHANGED
        AuthLDAPGroupAttribute memberUid

    AuthLDAPURL is correct; BindDN and BindPassword are correct also (verified with ldapvi -D ..). Apache version: Apache/2.2.9 (Debian). The error message seems cryptic to me; I have AuthzLDAPAuthoritative on, so where's the problem?

    EDIT: LDAP modules are loaded, so the problem is not with them being missing.

        # ls /etc/apache2/mods-enabled/*ldap*
        /etc/apache2/mods-enabled/authnz_ldap.load
        /etc/apache2/mods-enabled/ldap.load

    EDIT2: Solved it by replacing the funky "Require ldap-group cn=CHANGED, cn=CHANGED" line with:

        Require valid-user

    Since AuthzLDAPAuthoritative is on, no other auth methods will be used and the valid-user requirement will authenticate via LDAP. (right? :/)

  • Excel: making line charts so the line goes through all data points

    - by Mike
    Hi. I've got data spanning 50+ years for various products. Unfortunately not all products have data for each year. I've created a line chart to show the movement (quantity sold) of these products over the years. It works well, except where the data points are too far apart, e.g. 1965 and then 1975: for some reason there is no line. It's not perfect data because of the missing years, but I can live with that; I just want to see the trend, and not just sporadic dots, squares, or crosses. Any help or links greatly appreciated. Mike

  • Windows Phone 8 not interfacing with Zune software

    - by Cyberherbalist
    I just got a Lumia 521 with Windows Phone 8 and am trying to get the device to work with the Zune software on my PC. I still have my Windows Phone 7 device, and the new one is not working the same way. When I plug the WP7 device into the PC's USB port, it automatically fires up Zune and I can sync my podcasts and music etc to the device. But when I plug the Lumia into the PC it doesn't fire up Zune, and if I start Zune manually, it doesn't interface with the phone at all. Perhaps I am missing something and WP8 isn't supposed to use Zune to interface with the phone?

  • Can I run AD commands from a standard PowerShell script?

    - by Ben
    I am putting together a script to run post-sysprep. It should check if the machine is on the network, and if it is, then it should query AD to see if a computer account exists with its service tag (we're using these as the hostnames of the machines). If it does exist, it should delete the account and rejoin the machine to the domain. I have got the majority of the script running, but need to run the following:

        Remove-ADComputer -Identity $distinguishedName

    How can I run this from the "standard" PowerShell environment? I don't want to use the AD module. (By the way, I'm on a mixed-mode 2000/03 domain, as we are in the process of upgrading to 2008.) I'm new to PowerShell, so be gentle if I'm completely missing the point! Thanks, Ben
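
    One hedged route from stock PowerShell is the [adsi] type accelerator, which wraps System.DirectoryServices and needs no extra modules (it also works against 2000/2003 domain controllers). A sketch, assuming $distinguishedName holds the computer account's DN:

        # bind to the computer object over LDAP and remove it and any children
        $computer = [adsi]"LDAP://$distinguishedName"
        $computer.DeleteTree()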

  • Configure all hosts, then create a list of the config for all hosts?

    - by AME
    I deployed a huge number of hosts with Ansible, which worked very nicely. Each host got its individual settings and configuration. Now I'd like to generate a config file for another system that uses these hosts. For it, I need, for every host, a part of the generated configuration (the part that configures the database). Here is an example of the situation with two hosts having different configuration, and the other system that uses a part of the Ansible-generated configuration:

        host1 ansible configured dbA
        host2 ansible configured dbQ

    The other system:

        host1 = dbA
        host2 = dbQ

    The values are computed differently (dbQ instead of dbB for host2, for example) if the host belongs to a different cluster and so on, making it impractical to just read the host configuration out of the host_vars. I believe I would need to iterate over the hosts and let Ansible figure out the computed values for the variables like it would when deploying, but I do not know how to put that result in one template. Please advise :)
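
    A hedged sketch of such a template, assuming the per-host database value lives in a variable named db_name (a hypothetical name; substitute the real one): inside a template, hostvars exposes every host's variables, so a single Jinja2 loop run against one host can emit the whole list.

        {# templates/other-system.j2 -- db_name is a hypothetical variable name #}
        {% for h in groups['all'] %}
        {{ h }} = {{ hostvars[h]['db_name'] }}
        {% endfor %}

    The caveat: hostvars only carries values Ansible has actually computed, so facts may require gathering against all hosts first.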

  • How to clear Outlook's Exchange cached address book information

    - by Assaf
    When a new email address is added to our company's Exchange server, it doesn't show up immediately in my Outlook, and I suspect that's because of the "cached mode". When I disable cached mode and restart Outlook, I see the new address fine. But when I restore cached mode and restart Outlook, it's missing again. So I guess the cache wasn't updated by this change. I tried deleting the .nk2 file in %appdata%\Microsoft\Outlook, but that didn't help. How can I force Outlook to clear its address book cache?

  • Prevent registry changes by users

    - by graf_ignotiev
    Background: I run a small computer lab of 10 computers using Windows 7 x64 Enterprise. Our users are set up as limited users. For additional restrictions, I set up local group policy for non-administrators using the Microsoft Management Console. Problem: Recently, I found out that some of these restrictions had been removed. Reviewing the settings in MMC and in ntuser.pol showed that the settings should still be in place. However, the related registry settings were missing from ntuser.dat. I already have registry editing disabled in the GPO (though not in silent mode). Question: What is the best way to deal with this situation? Should I look into preventing registry setting changes? Should I set up registry auditing to find out how these keys are getting changed in the first place? Or should I give up the ghost and write some kind of logon script that enforces registry values if they've been changed? Any other ideas?

  • How do I convert an animated GIF to a YouTube friendly video format?

    - by Dave Webb
    My son has made some animations with Pivot Stickfigure Animator which we'd like to upload to YouTube. The problem is Pivot saves as animated GIFs, which I can't upload to YouTube. The Wikipedia article recommends using Windows Movie Maker to convert GIF to WMV, but unfortunately I'm using Windows 7, for which you get the new Windows Live Movie Maker, which doesn't seem to support GIFs. I Googled and found an article which said to use Beneton Movie GIF to convert animated GIF to AVI, but this relied on a third-party application which wasn't installed, and so it failed. Installing the missing application (pjBmp2Avi) by hand and adding it to the path still didn't allow Beneton to do the conversion. I hoped FFmpeg might do the trick, but it only outputs to animated GIFs; it won't read from them. Further Googling found lots of applications with 30-day trials and so on, but I was hoping for something free. So, any suggestions on how I can convert an animated GIF to a movie file on Windows using free (as in beer) software?
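
    For what it's worth, and hedged accordingly: newer FFmpeg builds can read animated GIFs as input even though FFmpeg is better known for writing them, so a one-line conversion may be worth a try before installing anything else (assuming a build with libx264):

        ffmpeg -i animation.gif -c:v libx264 -pix_fmt yuv420p animation.mp4

    The -pix_fmt yuv420p flag keeps the output playable in most players, and YouTube accepts the resulting MP4 directly.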

  • File creation time on Windows vs Linux

    - by Sergei
    We have the following setup:

        mountserver  - Debian Linux
        fileserver1  - Windows 2008 R2 storage server
        fileserver2  - Celerra NS20 exporting a CIFS share
        workstation  - Windows 7 with a drive mapped to the share on fileserver2

    What we are doing: mounted a share from fileserver1 on mountserver, e.g. /shared/fileserver1; mounted a share from fileserver2 on mountserver, e.g. /shared/fileserver2; ran rsync on mountserver to sync data from fileserver1 to fileserver2, using atime as a parameter to sync only data not older than X; after a while, tried to delete data older than Y on /shared/fileserver2. From what I see, the Linux stat command on mountserver returns the following when querying a file on /shared/fileserver2: [stat output screenshot not included]. At the same time, when I open the properties of the same file using the mapped drive connected to fileserver2, I see the following: [Explorer screenshot not included]. As you can see, the Created date of 12 August shown in Windows Explorer is nowhere to be seen in the stat output. Am I missing something here?

  • How to create a bootable Ubuntu Linux (10.04) USB installation for Macintosh

    - by vdavidovski
    I tried searching the Internet but could not find a decent tutorial explaining how to create a bootable Ubuntu Linux (10.04) USB installation that could be run not only on a PC but also on Macs and MacBook Pros. In addition, I tried rEFIt, but ended up with a "Missing operating system" error. Here is the layout of my USB drive, which boots fine on a PC (using MBR):

        Partition 1 (ext3, bootable) - Ubuntu Linux 32-bit; also contains the grub2 bootloader.
        Partition 2 (ext3)           - Ubuntu Linux 64-bit.
        Partition 3 (fat32)          - contains data.

    What would be the best way to enable this drive to boot under Mac OS X? And if rEFIt has to be used, could I simply have one more partition on the USB drive containing it? Thanks!

  • Installing Linux from External Card Reader

    - by Subhamoy Sengupta
    I have this problem. I was experimenting to see whether I could use a memory card (SDHC) as a USB drive for all intents and purposes. When I put the card in a USB card reader, I can use it just like a regular USB stick, and it also shows up in the BBS popup menu as a USB stick. I tried to create an installation medium out of it like this:

        sudo dd if=/path/to/image of=/dev/sdb

    When I tried to boot from it, simply nothing happened: the cursor blinked a couple of times and jumped to the GRUB of my pre-existing GNU/Linux installation. What am I missing here? Is this not doable? I tried this with Xubuntu 12.04 and Arch Linux, by the way. I have also tried UNetbootin instead of dd.
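
    A hedged sanity check before blaming the firmware: verify the image actually reached the card byte for byte (the path and device are assumptions carried over from the dd command above).

        # compare the card against the image, up to the image's exact size
        sudo cmp -n "$(stat -c %s /path/to/image)" /path/to/image /dev/sdb

    If that comparison passes, the write is fine and the BIOS most likely refuses to boot from the card reader even though it presents the card as a USB stick.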
