Search Results

Search found 20846 results on 834 pages for 'current dir'.

Page 204/834 | < Previous Page | 200 201 202 203 204 205 206 207 208 209 210 211  | Next Page >

  • post-receive hook permission denied "unable to create file" error

    - by ThomasReggi
    I just got gitolite installed on my web server and am trying to set up a post-receive hook that checks the repository out into Apache's document root. This is what my post-receive hook looks like (the script comes from the "Using Git to manage a web site" article):

        #!/bin/sh
        echo "post-receive example.com triggered"
        GIT_WORK_TREE=/srv/sites/example.com/public git checkout -f

    This is the error I get back from git push origin master on my local workstation. The files listed are files from within my repository:

        remote: post-receive example.com triggered
        remote: error: unable to create file .htaccess (Permission denied)
        remote: error: unable to create file .tm_sync.config (Permission denied)
        remote: fatal: cannot create directory at 'application': Permission denied

    Permissions of public:

        drwxr-xr-x 5 root root 4096 Jun 26 17:23 public
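    The listing above shows public owned by root, while the hook runs as whatever account gitolite uses for SSH pushes, so the checkout cannot write there. A minimal sketch of one possible fix, assuming gitolite runs as a user named git (adjust the user and group to your setup):

        # give the gitolite user ownership of the work tree so 'git checkout -f' can write to it
        chown -R git:git /srv/sites/example.com/public

        # or keep root as the owner and grant group write access to the gitolite user's group
        chgrp -R git /srv/sites/example.com/public
        chmod -R g+w /srv/sites/example.com/public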

    Read the article

  • ddclient to update namecheap subdomain?

    - by LF4
    I have a subdomain that I want to update with ddclient. I configured ddclient to get the IP from dyndns, but it is not updating the subdomain on Namecheap. They said to use yourdomain.com as the login instead of my actual domain. Has anyone been able to get Namecheap DNS updated with ddclient? I'm running CentOS 6.2 with ddclient 3.7.3. When I run ddclient I get the following:

        CONNECT:  checkip.dyndns.org
        CONNECTED:  using HTTP
        SENDING:  GET / HTTP/1.0
        SENDING:  Host: checkip.dyndns.org
        SENDING:  User-Agent: ddclient/3.7.3
        SENDING:  Connection: close
        SENDING:
        RECEIVE:  HTTP/1.1 200 OK
        RECEIVE:  Content-Type: text/html
        RECEIVE:  Server: DynDNS-CheckIP/1.0
        RECEIVE:  Connection: close
        RECEIVE:  Cache-Control: no-cache
        RECEIVE:  Pragma: no-cache
        RECEIVE:  Content-Length: 106
        RECEIVE:
        RECEIVE:  <html><head><title>Current IP Check</title></head><body>Current IP Address: IPADD</body></html>
        Use of uninitialized value in string ne at /usr/sbin/ddclient line 1998.
        WARNING:  skipping update of lf4bot from <nothing> to IPADD
        WARNING:  last updated <never> but last attempt on Fri Jun 15 22:46:21 2012 failed.
        WARNING:  Wait at least 5 minutes between update attempts.

    ddclient.conf file:

        daemon=300                  # check every 300 seconds
        syslog=yes                  # log update msgs to syslog
        mail=root                   # mail all msgs to root
        mail-failure=root           # mail failed update msgs to root
        pid=/var/run/ddclient.pid   # record PID in file.
        ssl=yes                     # use ssl-support.  Works with
        use=web, web=checkip.dyndns.org/, web-skip='IP Address' # found after IP Address
        protocol=namecheap \
        server=dynamicdns.park-your-domain.com \
        login=yourdomain.com \
        password=PASSWORD \
        lf4bot
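    One thing that stands out, offered only as a sketch to try: the protocol block uses shell-style backslash continuations, whereas ddclient's sample configs separate these settings with commas or plain newlines, and the literal "yourdomain.com" placeholder has to be replaced with the real registered domain. The values below are assumptions about a typical Namecheap dynamic DNS setup (the password is the Dynamic DNS password from the Namecheap panel, not the account password):

        use=web, web=checkip.dyndns.org/, web-skip='IP Address'
        protocol=namecheap
        server=dynamicdns.park-your-domain.com
        login=example.com               # the bare domain registered at Namecheap
        password=DYNAMIC_DNS_PASSWORD   # the Dynamic DNS password, not the account password
        lf4bot                          # the host (subdomain) record to update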

    Read the article

  • Squeeze/Lenny compilation : Library Link error

    - by PEZ
    I've got a problem here: I have a C++ library ("DataTsBroad") and a C++ test app ("DataTsBroadTest") to test it. Currently the library and the test app are both compiled on Debian Lenny. Now I want to keep compiling the test app on Lenny (a customer constraint), but compile the library on Squeeze or Wheezy so I can work on the more recent Debian releases. I successfully compiled the library on Squeeze, but when I then try to build the test app against that library on Lenny, it fails with a link error:

        Linking CXX executable DataTsBroadTest
        /home/nis/pezierg/test/ProductMak/Export/DataTsBroad/L64/Release/libDataTsBroad64.so: undefined reference to `std::ctype<char>::_M_widen_init() const@GLIBCXX_3.4.11'
        collect2: ld returned 1 exit status
        make[2]: *** [DataTsBroadTest] Error 1
        make[1]: *** [CMakeFiles/DataTsBroadTest.dir/all] Error 2
        make: *** [all] Error 2

    The problem seems to come from the C++ ostream library: if I comment out all of its uses in my library, it works. But how can I really fix the problem?
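    The symbol is versioned GLIBCXX_3.4.11, which Lenny's older libstdc++ does not provide; building on Squeeze makes the library reference the newer libstdc++ ABI. A hedged way to confirm this (the library paths are the usual ones and may differ, e.g. on 64-bit systems):

        # which GLIBCXX versions each system's libstdc++ actually provides
        strings /usr/lib/libstdc++.so.6 | grep GLIBCXX

        # which versioned symbols the freshly built library requires
        objdump -T libDataTsBroad64.so | grep GLIBCXX_3.4.11

    If Lenny's libstdc++ stops before GLIBCXX_3.4.11, the usual options are to build the library in a Lenny chroot (or with Lenny's toolchain) so it only references symbols Lenny ships, or to upgrade the runtime on the target, which the customer constraint likely rules out.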

    Read the article

  • Website crawler/spider to get site map

    - by ack__
    I need to retrieve a whole website map, in a format like:

        http://example.org/
        http://example.org/product/
        http://example.org/service/
        http://example.org/about/
        http://example.org/product/viewproduct/

    I need it to be link-based (no file or directory brute-forcing), i.e.: parse the homepage, retrieve all links, follow them, retrieve their links, and so on. I also need the ability to detect whether a page is a "template", so that it does not retrieve all of the "child pages". For example, if the following links are found:

        http://example.org/product/viewproduct?id=1
        http://example.org/product/viewproduct?id=2
        http://example.org/product/viewproduct?id=3

    I need to get http://example.org/product/viewproduct only once. I've looked into HTTrack and wget (with its spider option), but nothing conclusive so far. The tool should be downloadable, and I would prefer it to run on Linux. It can be written in any language. Thanks
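    One rough starting point, as a sketch only: wget's spider mode crawls by following links without saving files, and its log can be reduced to a unique URL list. Stripping the query strings afterwards is what collapses the ?id=... variants into one entry; that sed step is an assumption about the URL scheme, not something wget does for you, and the grep pattern may need adjusting to your wget version's log format.

        # crawl by following links only, writing everything wget visits to a log
        wget --spider --recursive --no-verbose --output-file=spider.log http://example.org/

        # pull the URLs out of the log, drop query strings, keep each page once
        grep -o 'http://[^] "]*' spider.log | sed 's/?.*$//' | sort -u > sitemap.txt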

    Read the article

  • Forward real IP through Haproxy => Nginx => Unicorn

    - by Hendrik
    How do I forward the real visitor IP address to Unicorn? The current setup is: HAProxy => Nginx => Unicorn. How can I pass the real IP address from HAProxy, to Nginx, to Unicorn? Currently it is always only 127.0.0.1. I also read that the X- headers are going to be deprecated (http://tools.ietf.org/html/rfc6648) - how will this impact us?

    HAProxy config:

        # haproxy config
        defaults
            log global
            mode http
            option httplog
            option dontlognull
            option httpclose
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        # Rails Backend
        backend deployer-production
            reqrep ^([^\ ]*)\ /api/(.*) \1\ /\2
            balance roundrobin
            server deployer-production localhost:9000 check

    Nginx config:

        upstream unicorn-production {
            server unix:/tmp/unicorn.ordify-backend-production.sock fail_timeout=0;
        }

        server {
            listen 9000 default;
            server_name manager.ordify.localhost;
            root /home/deployer/apps/ordify-backend-production/current/public;
            access_log /var/log/nginx/ordify-backend-production_access.log;
            rewrite_log on;

            try_files $uri/index.html $uri @unicorn;

            location @unicorn {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_redirect off;
                proxy_pass http://unicorn-production;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
            }

            error_page 500 502 503 504 /500.html;
            client_max_body_size 4G;
            keepalive_timeout 10;
        }
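    As the config stands, HAProxy never adds a forwarded-for header, so Nginx only ever sees 127.0.0.1 and passes that on. A sketch of one possible approach - the directives are standard HAProxy/Nginx ones, but their placement in this particular config is an assumption, and the realip part requires Nginx built with ngx_http_realip_module:

        # haproxy: add the client address as an X-Forwarded-For header on every request
        defaults
            option forwardfor

        # nginx: inside the existing server { } block, trust the header set by the
        # local haproxy and use it as the client address
        set_real_ip_from 127.0.0.1;
        real_ip_header X-Forwarded-For;

    With the realip directives in place, $remote_addr becomes the real client address, so the X-Real-IP header Nginx already sends to Unicorn carries the right value without further changes.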

    Read the article

  • ZFS with L2ARC (SSD) slower for random seeks than without L2ARC

    - by Florian Kruse
    I am currently testing ZFS (OpenSolaris 2009.06) on an older file server to evaluate its use for our needs. Our current setup is as follows:

        Dual core (2.4 GHz) with 4 GB RAM
        3x SATA controller with 11 HDDs (250 GB) and one SSD (OCZ Vertex 2 100 GB)

    We want to evaluate the use of an L2ARC, so the current zpool is:

        $ zpool status
          pool: tank
         state: ONLINE
         scrub: none requested
        config:

                NAME        STATE     READ WRITE CKSUM
                afstank     ONLINE       0     0     0
                  raidz1    ONLINE       0     0     0
                    c11t0d0 ONLINE       0     0     0
                    c11t1d0 ONLINE       0     0     0
                    c11t2d0 ONLINE       0     0     0
                    c11t3d0 ONLINE       0     0     0
                  raidz1    ONLINE       0     0     0
                    c13t0d0 ONLINE       0     0     0
                    c13t1d0 ONLINE       0     0     0
                    c13t2d0 ONLINE       0     0     0
                    c13t3d0 ONLINE       0     0     0
                cache
                  c14t3d0   ONLINE       0     0     0

    where c14t3d0 is the SSD (of course). We run IO tests with bonnie++ 1.03d; the size is set to 200 GB (-s 200g) so that the test sample is never completely in ARC/L2ARC. The results without the SSD (average values over several runs, which show no differences):

        write_chr     write_blk     rewrite      read_chr     read_blk      random seeks
        101.998 kB/s  214.258 kB/s  96.673 kB/s  77.702 kB/s  254.695 kB/s  900 /s

    With the SSD it becomes interesting. My assumption was that the results should, in the worst case, be at least the same. While the write/read/rewrite rates are not different, the random seek rate differs significantly between individual bonnie++ runs (between 188 /s and 1333 /s so far); the average is 548 +- 200 /s, so below the value without the SSD. So, my questions are mainly:

    Why do the random seek rates differ so much? If the seeks are really random, they should not differ much (my assumption). So even if the SSD were impairing the performance, it should be the same in each bonnie++ run.

    Why is the random seek performance worse in most of the bonnie++ runs? I would assume that some part of the bonnie++ data is in the L2ARC and random seeks on this data would perform better, while random seeks on other data would just perform similarly to before.
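    A hedged aside: on OpenSolaris the ARC/L2ARC counters are exposed through kstat, so it is possible to check during a bonnie++ run whether the cache device is actually being hit or is still warming up. The statistic names below are the usual ones but may vary slightly between builds, so treat them as an assumption to verify:

        # overall ARC statistics, filtered to the L2ARC counters
        kstat -p zfs:0:arcstats | grep l2_

        # of particular interest: hits, misses, and how much of the SSD is filled
        kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses zfs:0:arcstats:l2_size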

    Read the article

  • Outputting SVN hook messages

    - by Luke Segars
    Hi all, I have a Subversion repository on my Linux machine that is set up to export a new build of a project every time a commit occurs, using a post-commit hook. I would really like to be able to give the committer an output message containing some status information once the hook completes. Is it possible to redirect the output of the hook so that it comes after the standard commit messages? For example:

        owner@dev-machine:/working/dir$ svn commit
        Sending        FILE1
        Sending        FILE2
        Transmitting file data ...
        Committed revision 13.
        Exporting project...
        Successfully exported to mysite.com

    The addition of the last two lines is the functionality I'm looking for.
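    As far as I know, the svn client discards a hook's standard output but does relay its standard error back to the committer, so one possible sketch is to write the status lines to stderr at the end of the post-commit hook (the export step and messages are placeholders). Depending on the client version, the lines may arrive prefixed with a warning marker, and very old clients ignore post-commit stderr entirely:

        #!/bin/sh
        REPOS="$1"
        REV="$2"

        # ... perform the export here ...

        # stdout from a hook is discarded, but stderr is sent back to the svn client
        echo "Exporting project..."                >&2
        echo "Successfully exported to mysite.com" >&2

        exit 0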

    Read the article

  • Disallow everything in directory except subdirectories with certain name using .htaccess or vhost.conf

    - by Jizbo Jonez
    How can I disallow everything in a directory except any subdirectories that have a certain name, using .htaccess or even the site's vhost.conf? I have a protected main directory which contains subdirectories. Each of the subdirectories has another child directory named 'thumbs' that I want to allow people access to. In the thumbs folders, the filenames all start with a 'thumb_' prefix. So which approach should I use, .htaccess FilesMatch or vhost DirectoryMatch, and what would the code be? Thanks
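    A sketch of the vhost approach, assuming Apache 2.2-style access control and that /var/www/main is the protected directory (both the path and the Apache version are assumptions): deny the whole tree, then re-open just the thumbs directories with DirectoryMatch.

        <Directory /var/www/main>
            Order deny,allow
            Deny from all
        </Directory>

        # re-open only the 'thumbs' subdirectories one level down
        <DirectoryMatch "^/var/www/main/[^/]+/thumbs/">
            Order allow,deny
            Allow from all
        </DirectoryMatch>

    If access should additionally be limited to the thumbnail files themselves, a <FilesMatch "^thumb_"> block inside the DirectoryMatch section could narrow it further.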

    Read the article

  • Persistent network share connection not working with runas

    - by binarycoder
    If I use runas /user:DOMAIN\user cmd.exe (on XP), previously mapped persistent network drives are shown as unavailable. net use shows:

        Status       Local     Remote                    Network
        -------------------------------------------------------------------------------
        Unavailable  H:        \\SERVER\SHARE            Microsoft Windows Network

    dir H: fails with "The system cannot find the path specified.". The connection is easily revived with NET USE H: \\SERVER\SHARE; I am not asked for a password when I do this. What is going on? Can I make Windows safely revive this drive automatically when it is first accessed?
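    Mapped drives belong to a logon session, and runas starts a new session that has not reconnected them yet, which is why they show as Unavailable until something touches them. One possible sketch is simply to remap the share at the start of the elevated session via a small batch file (the file name and path are placeholders):

        rem map-drives.cmd - reconnect shares inside the new logon session
        @echo off
        net use H: \\SERVER\SHARE /persistent:yes

    Launched for example as: runas /user:DOMAIN\user "cmd /k C:\scripts\map-drives.cmd", which leaves an elevated prompt open with H: already connected.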

    Read the article

  • Calculate disk space occupied by many .png files

    - by Alexander Farber
    I have 357 .png files located in different subdirectories of the current directory:

        settings# find . -name \*.png | wc -l
        357
        settings# find . -name \*.png | head
        ./assets/authenticationIcons/audio.png
        ./assets/authenticationIcons/bbid.png
        ./assets/authenticationIcons/camera.png
        ./bin/icons/ca_video_chat.png
        ./bin/icons/ca_voice_control.png
        ./bin/icons/ca_vpn.png
        ./bin/icons/ca_wifi.png

    Is there a one-liner to calculate the total disk space occupied by them (before I pngcrush them)? I've tried, unsuccessfully:

        settings# find . -name \*.png | xargs du -s
        4       ./assets/support/wifi_locked_icon_white.png
        1       ./assets/support/wifi_vpn_icon_connected.png
        1       ./assets/support/wi_fi.png
        1       ./assets/support/wi_fi_conected.png
        8       ./bin/blackberry-tablet-icon.png
        2       ./bin/icons/ca_about.png
        2       ./bin/icons/ca_accessibility.png
        2       ./bin/icons/ca_accounts.png
        2       ./bin/icons/ca_airplane_mode.png
        2       ./bin/icons/ca_application_permissions.png
        1       ./bin/icons/ca_balance.png
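    Two possible one-liners, sketched under the assumption of GNU findutils/coreutils: du's -c flag prints a grand total, --files0-from/-print0 keeps filenames with spaces intact, and tail -1 keeps only the total line.

        # total disk usage of all .png files, robust against spaces in names
        find . -name '*.png' -print0 | du -ch --files0-from=- | tail -1

        # alternative: sum the apparent file sizes in bytes
        find . -name '*.png' -printf '%s\n' | awk '{ sum += $1 } END { print sum " bytes" }'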

    Read the article

  • Help me please with this error

    - by Brandon
    I set up IIS and moved my folder with all the files to the IIS directory. Now when I go to http://localhost/thefolder I get:

        An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.BadImageFormatException: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)

        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [BadImageFormatException: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)]
           Luxand.FSDK.ActivateLibrary(String LicenseKey) +0
           FaceRecognition._Default.Page_Load(Object sender, EventArgs e) in D:\Project Details\Layne Projects\DotNet Project\FaceRecognition\FaceRecognition\Default.aspx.cs:60
           System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +25
           System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +42
           System.Web.UI.Control.OnLoad(EventArgs e) +132
           System.Web.UI.Control.LoadRecursive() +66
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2428
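    BadImageFormatException raised while activating a native SDK (here the Luxand library) usually points to a 32/64-bit mismatch between the application pool and the DLL. A possible sketch, assuming IIS 7 or later and a 32-bit native library (both assumptions - the pool name is also a placeholder), is to let the pool run 32-bit:

        rem allow the application pool to load 32-bit native DLLs on 64-bit Windows
        %windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /enable32BitAppOnWin64:true

    If the site runs under a different pool, substitute its name; the equivalent checkbox is "Enable 32-Bit Applications" in the pool's Advanced Settings.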

    Read the article

  • SQL Server architecture - they want to move my database to new instance...Why?

    - by O'MALLEY
    Our current production database environment contains about 10 similarly managed databases. Our agency has just purchased and is installing new blade chassis, and wants to move my database to a new instance (leaving the other 9 on another). This decision is being driven by one of our IT staff, not a DBA. I am a project manager, not a DBA, but I know enough not to necessarily have a good feeling about this decision, and I am urging our IT department to make a sound decision based on what is best for the database. Our IT department has stated that it is not good to have all our eggs in one basket, and has also stated that my database contains "regulatory data" so it should be on its own instance. A couple of truths:

    - None of the databases on the current instance are OLTP databases, nor are any of them data warehouses.
    - My database currently has joins/views to a couple of the other databases in the production environment.

    So my questions are as follows: Am I wrong to disregard a statement about eggs in baskets? (Hello, this is why we have maintenance plans/disaster recovery plans.) I'll mention that other databases also have regulatory data too. What types of questions do I need to ask to determine if this is a sound decision? (A DBA friend mentioned that if the service level agreement of said database does not radically differ from the others, then why do they want to do this?) I have done some research on linked servers. What arguments should I bring forth about the fact that I currently have views set up that rely on data from other databases?

    Read the article

  • SVN: Error validating server certificate for svn hook linux

    - by Dr Casper Black
    Hi, I managed to set up an SVN (over SSL) server and a TortoiseSVN client on Windows. I made a post-commit hook for a test project. The post-commit hook updates the web directory so the PHP app can be executed with the newest version. It all works when done over a shell. The only problem is, when I commit the changes from the client on Windows, the change is committed but the hook throws "post-commit hook failed (exit code 1)" with the output:

        Error validating server certificate for 'https://SERVER_IP:443':
         - The certificate is not issued by a trusted authority. Use the
           fingerprint to validate the certificate manually!
         - The certificate hostname does not match.
        Certificate information:
         - Hostname: DEVSRVR
         - Valid: from Fri, 28 Jan 2011 09:22:45 GMT until Sat, 28 Jan 2012 09:22:45 GMT
         - Issuer: PHP, SS, SS, SRB
         - Fingerprint: 5f:d0:50:d6:dd:a6:d4:64:a5:ac:3a:4b:7c:7d:33:e3:75:dd:23:9f
        (R)eject, accept (t)emporarily or accept (p)ermanently?
        svn: OPTIONS of 'https://SERVER_IP/svn/myproject/trunk': Server certificate verification failed: certificate issued for a different hostname, issuer is not trusted (https://SERVER_IP)
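    The hook runs svn non-interactively on the server, so nobody is there to answer the accept-certificate prompt. Two possible sketches (the account name and working-copy path are placeholders - use whatever user the hook actually runs as): either accept the certificate permanently once as that account, or tell the update inside the hook to trust the server certificate.

        # one-off: run any svn command against the repo as the hook's user and answer (p)ermanently
        sudo -u www-data svn info https://SERVER_IP/svn/myproject/trunk

        # or, inside the post-commit hook (requires svn 1.6 or newer):
        svn update /path/to/webdir --non-interactive --trust-server-cert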

    Read the article

  • Need some help with Apache .htaccess

    - by Legend
    I am trying to set up an application that was built using the Zend Framework. Let's say my subdomain is http://subdomain.domain.com and that it points to http://www.domain.com/projectdir/. The structure of the project dir is the following:

        application/
            ...
        library/
            ...
        public/
            ...
            .htaccess

    The contents of the .htaccess are:

        SetEnv APPLICATION_ENV production
        RewriteEngine On
        # skip existing files and folders
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        # send everything to index
        RewriteRule ^.*$ index.php [NC,L]

    While this works, the child objects on the page are being directed to the domain root, i.e. the image URLs (and the CSS files etc.) are broken because they are being requested as something like http://www.domain.com/images/image.png. Can someone please tell me how to fix this?
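    Those absolute /images/... paths resolve against the document root, so the usual fix is to make the Zend public/ directory the document root for the subdomain instead of serving it out of a subdirectory. A hedged vhost sketch, assuming you can add a virtual host and that /var/www/projectdir is the real filesystem path (both assumptions):

        <VirtualHost *:80>
            ServerName subdomain.domain.com
            # serve the Zend public/ folder as the document root
            DocumentRoot /var/www/projectdir/public

            <Directory /var/www/projectdir/public>
                # let the project's .htaccess rewrite rules apply
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    If the vhost cannot be changed, the fallback is to prefix the asset URLs (or add a base href) so they point under /projectdir/public/ instead of the domain root.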

    Read the article

  • Is it possible to use SELinux MCS permissions with Samba?

    - by Yuri
    I created a user1:

        adduser --shell /sbin/nologin --no-create-home user1
        passwd user1
        smbpasswd -a user1
        smbpasswd -e user1
        semanage login -a -s "unconfined_u" -r "s0-s0:c0" user1

    and added category c0 to the folder ./123 inside the Samba share:

        chcat s0:c0 /share/123/

    After that, user1 can't go into this folder:

        type=AVC msg=audit(1332693158.129:48): avc:  denied  { read } for  pid=1122 comm="smbd" name="123" dev=sda1 ino=786438 scontext=system_u:system_r:smbd_t:s0 tcontext=unconfined_u:object_r:samba_share_t:s0:c0 tclass=dir

    But if I remove the c0 category:

        restorecon -v /share/123/

    user1 opens the folder with no problem. Am I doing something wrong, or does Samba not support SELinux MCS? Installed on CentOS 6.2 are:

        samba3.i686                    3.6.3-44.el6        @sernet-samba
        selinux-policy.noarch          3.7.19-126.el6_2.10 @updates
        selinux-policy-targeted.noarch 3.7.19-126.el6_2.10 @updates
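    One hedged observation about the AVC record above: the process being denied is smbd itself, running as system_u:system_r:smbd_t:s0, i.e. without the c0 category, so the MCS range assigned to user1 with semanage login apparently never applies to access made through Samba (such login mappings take effect for sessions that get their own user context, e.g. SSH). A quick way to confirm what contexts are actually in play:

        # the SELinux context of the running smbd processes
        ps -eZ | grep smbd

        # the contexts on the share and the categorised subfolder
        ls -dZ /share /share/123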

    Read the article

  • Photoshop not loading fonts correctly

    - by Brett
    I'm running Photoshop CS4 on Windows XP and I have been sent a PSD as well as the needed fonts. I have installed all the fonts I received into the Windows fonts directory. When I open the PSD in Photoshop it tells me I am missing some fonts; then when I click on some text it tells me I am missing Roboto Bold or Roboto Regular. I check the font dropdown list in Photoshop and I see only "Roboto" and that's it; I'm not sure whether that is correct or not. I then go to the Windows fonts directory to confirm all the fonts are there, and they are. Now I'm not sure if it makes any difference that the PSD, I believe, was made on a Mac and I'm on a PC?

    Read the article

  • Configuring three monitors with two Radeon X1600/X1650 graphics cards under Ubuntu

    - by cpm
    I have three SyncMaster 932a monitors I want to use with two Radeon X1600/X1650 cards under Linux. I am running X.org X Server 1.6.0, as provided by Ubuntu's Wubi installer. After turning off mirroring, I ended up with this xorg.conf:

        Section "Monitor"
            Identifier  "Configured Monitor"
        EndSection

        Section "Screen"
            Identifier  "Default Screen"
            Monitor     "Configured Monitor"
            Device      "Configured Video Device"
            SubSection "Display"
                Virtual 2560 1024
            EndSubSection
        EndSection

        Section "Device"
            Identifier  "Configured Video Device"
        EndSection

    The left monitor had a menu bar and a task bar, the center monitor was just desktop, and windows would maximize to the current monitor. The third monitor and the second graphics card weren't being used at all. Then I changed my configuration to manually specify each card by its PCI bus:

        Section "ServerLayout"
            Identifier  "TheLayout"
            Screen 0    "Radeon Screen 1"
            Screen 1    "Radeon Screen 2" RightOf "Radeon Screen 1"
        EndSection

        Section "Screen"
            Identifier  "Radeon Screen 1"
            Monitor     "Configured Monitor"
            Device      "Radeon the First"
            SubSection "Display"
                Virtual 2560 1024
            EndSubSection
        EndSection

        Section "Screen"
            Identifier  "Radeon Screen 2"
            Monitor     "Configured Monitor"
            Device      "Radeon the Second"
        EndSection

        Section "Device"
            Identifier  "Radeon the First"
            Driver      "radeon"
            BusID       "PCI:1:0:0"
        EndSection

        Section "Device"
            Identifier  "Radeon the Second"
            Driver      "radeon"
            BusID       "PCI:2:0:0"
        EndSection

        Section "Monitor"
            Identifier  "Configured Monitor"
        EndSection

    Now both the left and right monitors have task bars and menu bars. Windows cannot be dragged from the first two monitors to the third monitor. Also, maximizing on the left or center monitor fills both of those monitors. I also tried adding Option "Xinerama" "true" to the ServerLayout section, but then X11 wasn't able to start up. I want to:

    - Allow moving windows across all three monitors.
    - Have maximizing fill only the current monitor.
    - Have menu/task bars on only the left monitor, or on all three monitors.

    How can I make this possible?

    Read the article

  • Specify default group and permissions for new files in a certain directory

    - by mislav
    I have a certain directory containing a project shared by multiple users. These users use SSH to gain access to this directory and modify/create files. The project should only be writable by a certain group of users: let's call it "mygroup". During an SSH session, all files/directories created by the current user should by default be owned by group "mygroup" and have group-writable permissions. I can solve the permissions problem with umask:

        $ cd project
        $ umask 002
        $ touch test.txt

    File "test.txt" is now group-writable, but it still belongs to my default group ("mislav", same as my username) and not to "mygroup". I can chgrp recursively to set the desired group, but I wanted to know whether there is a way to set some group implicitly, the way umask changes default permissions during a session. This specific directory is a shared git repo with a working copy, and I want git checkout and git reset operations to set the correct mask and group for new files created in the working copy. The OS is Ubuntu Linux.

    Update: a colleague suggests I should look into getfacl/setfacl for POSIX ACLs, but the solution below combined with umask 002 in the current session is good enough for me and is much simpler.
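    A sketch of the usual approach, assuming the group already exists: the setgid bit on a directory makes files and subdirectories created inside it inherit that directory's group, independent of the creating user's default group; combined with umask 002 this covers the group-writable requirement for new files.

        # hand the project tree to the shared group
        chgrp -R mygroup project

        # setgid on every directory: new files/dirs created inside inherit group "mygroup"
        find project -type d -exec chmod g+s {} +

        # make what already exists group-writable
        chmod -R g+w project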

    Read the article

  • courier-imap w mysql : share a folder between 2 virtual users on debian

    - by Michael
    Hi, I have a working courier-imap server on my Debian Etch private server; users are virtual and authentication goes through MySQL. It has been working well for years. I would like to share an IMAP folder between two users. I thought I would just have to do something like this:

        cd path/to/mailusers/dir
        ln -s path/to/user1/maildir/.folder_to_be_synched path/to/user2/maildir/

    After I entered the command, I found that user2 saw a new folder in its IMAP client, but the folder appeared empty. It is not a permission problem, because all the virtual users have the same permissions on the file system. Any idea what I could do? Thanks

    Read the article

  • Create an alias in Windows XP

    - by Lord Torgamus
    Back in school, I used to have a .login file along the lines of:

        alias ll = ls -l
        alias dir = ls -Fhl
        alias web = cd ~/public/public_www/development
        ...

    I'd like to do that sort of thing with my XP box here at work, but most of the resources I've found online seem fairly complicated and heavy-duty. Is there a way to do this that doesn't involve mucking about in the registry or running a large batch file?

    EDIT: My original reason for asking this was that I only need the command line for one command in one specific folder, and I wanted to be able to get to that folder quickly when I launched the command line. But the accepted answer for this question is so good that I decided to ask about my original issue as a separate question here.
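    A sketch of the lightweight route on XP, using cmd's built-in doskey macros; the file name and the aliases themselves are only examples:

        rem aliases.cmd - define a few doskey macros for this console session
        @echo off
        doskey ll=dir /b $*
        doskey web=cd /d C:\work\project

    Starting the console with cmd /k C:\scripts\aliases.cmd (for instance from a shortcut's Target field) makes the macros available for that session without touching the registry.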

    Read the article

  • How to make rewrite rules relative to the .htaccess file

    - by Kendall Hopkins
    Currently I have an .htaccess file like this:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f [OR]
        RewriteCond %{REQUEST_URI} ^/(always|rewrite|these|dirs)/ [NC]
        RewriteRule ^(.*)$ router.php [L,QSA]

    It works great when the site files are in the document root of the web server (i.e. domain.com/abc.php -> /abc.php). But in our current setup (which isn't changeable) this isn't guaranteed: we can sometimes have an arbitrary folder in between the document root and the folder containing the .htaccess file (i.e. domain.com/something/abc.php -> /something/abc.php). The only problem with that is that the second RewriteCond no longer works. Is there any way to check the accessed path relative to the .htaccess file's directory? For example:

    If I have a site where domain.com/rewrite/ is the directory of the .htaccess file:

        NOT FORCED TO REWRITE -> domain.com/rewrite/index.php
        FORCED TO REWRITE     -> domain.com/rewrite/rewrite/index.php

    If I have a site where domain.com/ is the directory of the .htaccess file:

        NOT FORCED TO REWRITE -> domain.com/index.php
        FORCED TO REWRITE     -> domain.com/rewrite/index.php
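    In per-directory (.htaccess) context, a RewriteRule pattern is matched against the path relative to the directory containing the .htaccess file, so one possible sketch is to drop the REQUEST_URI test and split the logic into two rules. This keeps the behaviour described above regardless of how deep the directory sits; it is a sketch rather than a drop-in, since it changes the rule ordering:

        RewriteEngine On

        # paths under these directories, relative to this .htaccess, are always routed
        RewriteRule ^(always|rewrite|these|dirs)/ router.php [NC,L,QSA]

        # everything else is routed only when no real file exists
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^ router.php [L,QSA]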

    Read the article

  • Apache 2.2/PHP is not loading the correct php.ini

    - by Jesper
    I have been struggling with this for days. I have a Windows Server 2008 machine on which I have installed Apache 2.2, PHP and MySQL. I kept getting the error that mysql_connect was an unknown function, so I followed all the guides I could find, but with no result. I have now noticed that phpinfo() says that the php.ini directory is C:\Windows. This is strange, since I've written this in httpd.conf:

        PHPIniDir 'c:\PHP\'
        LoadModule php5_module "c:/PHP/php5apache2_2.dll"
        AddHandler application/x-httpd-php .php

    How do I get Apache and PHP to read my php.ini and thereby include MySQL?
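    Two things worth checking, offered as a sketch based on the snippet above: the trailing backslash just before the closing quote in PHPIniDir is a known trouble spot on Windows, so the usual form is forward slashes with no trailing slash, and phpinfo() distinguishes the compiled-in default path (often C:\Windows) from the "Loaded Configuration File" line, which is the one that matters. Apache needs a restart after the change.

        # httpd.conf - forward slashes, no trailing slash
        PHPIniDir "C:/PHP"
        LoadModule php5_module "C:/PHP/php5apache2_2.dll"
        AddHandler application/x-httpd-php .php

    Once the right php.ini is being loaded, mysql_connect additionally needs extension=php_mysql.dll enabled and extension_dir pointing at PHP's ext folder (e.g. "C:/PHP/ext") in that php.ini.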

    Read the article

  • google sitemap generator installation selinux

    - by adnan
    When trying to install Google Sitemap Generator I received this error:

        Change security context of  to system_u:object_r:httpd_modules_t
        install: WARNING: ignoring --context (-Z); this kernel is not SELinux-enabled
        Program files successfully copied.
        ./install.sh: line 488: 14284 Segmentation fault  "$DEST_DIR/$BIN_DIR/$DAEMON_BIN" update_setting $update_setting_flags "apache_conf=$APACHE_CONF" "apache_group=$APACHE_GROUP" > /dev/null

    After choosing the sitemap submission file settings, I tried to uninstall it, run getenforce, and try again, but the same problem occurred. Also, /etc/sysconfig does not contain the selinux file. My OS is CentOS 6 x86_64.
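    A hedged way to see where SELinux actually stands on this box (the installer's warning only says the running kernel is not SELinux-enabled, which is separate from the later segfault): on CentOS 6 the canonical configuration lives in /etc/selinux/config, and /etc/sysconfig/selinux is normally just a symlink to it, so its absence by itself is not fatal.

        # current SELinux mode and loaded policy, if the tools are installed
        sestatus
        getenforce

        # the persistent configuration file
        cat /etc/selinux/config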

    Read the article

  • Save and restore multiple layers within a Photoshop action that flattens

    - by SuitCase
    I'm editing comic pages with layers - "background", "foreground", "lineart" and "over lineart". I have a Photoshop action that includes a Mode > Bitmap command, which requires the image to be flattened. I need this part of the action because I use the Halftone Screen method of reducing the greyscale image to bitmap on the "background" layer, creating a certain effect; I am pretty sure there is no filter or anything else that gives the same effect. After the mode is changed to bitmap, my action changes things back to greyscale for further changes.

    This poses a problem. I only want to do the bitmap mode change on the background layer, and afterwards I want to restore the layer structure as it was - with the foreground, lineart and over lineart layers back above the now-halftoned background.

    My current method of saving these layers and restoring them is clumsy. My action is able to automatically save the "foreground" layer by selecting it, cutting it, then pasting it back in after the mode change is over. But for the "lineart" and "over lineart" layers, I have to manually cut them, paste them into a new document, and later re-cut and re-paste them after running my action. This is so clunky!

    What I would like to know is whether it's possible to set aside my layers in an automated way, and then bring them back in, also in an automated way. An ugly (but functional) solution would be to replicate my manual steps of creating new documents and pasting the layers there temporarily, but I don't think Photoshop allows an action to work outside of the current document. It seems to me that the only option is the clever hack of incorporating the clipboard into the action, but that leaves me stuck, as I have two more layers that can't fit onto that same clipboard. Help or suggestions would be appreciated. I can keep on doing it manually, but having a comprehensive action would save me a ton of time.

    Read the article

  • mount dev, proc, sys in a chroot environment?

    - by Patrick
    I'm trying to create a Linux image with custom-picked packages. I followed the guide here: http://www.olpcnews.com/forum/index.php?topic=4766.0 However, when I tried to install some packages, configuration failed because the proc, sys and dev directories were missing. So I learned from other places that I need to "mount" the host's proc and related directories into my chroot environment, but I have seen two syntaxes and am not sure which one to use. On the host machine:

        mount --bind /proc <chroot dir>/proc

    and another syntax (in the chroot environment):

        mount -t proc none /proc

    Which one should I use, and what is the difference?

    Edit: What I'm trying to do is hand-craft the packages I'm going to use on an XO laptop, because compiling packages takes a really long time on the real XO hardware; if I can build all the packages I need and just flash the image to the XO, I can save time and space.
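    As far as I know, the two lines end up largely equivalent for /proc: mount --bind re-exposes the host's already-mounted procfs inside the chroot, while mount -t proc mounts a fresh instance of the proc filesystem at that path. For /dev a bind mount is the usual choice, since the device nodes only exist on the host. A sketch of a typical pre-chroot sequence, assuming the chroot lives at /srv/chroot (a placeholder path):

        # run on the host before chrooting in
        CHROOT=/srv/chroot
        mount -t proc  proc "$CHROOT/proc"       # fresh procfs instance
        mount -t sysfs sys  "$CHROOT/sys"        # fresh sysfs instance
        mount --bind /dev     "$CHROOT/dev"      # reuse the host's device nodes
        mount --bind /dev/pts "$CHROOT/dev/pts"  # pseudo-terminals for package scripts

        chroot "$CHROOT" /bin/bash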

    Read the article
