Search Results

Search found 15543 results on 622 pages for 'memory editing'.


  • MultiPath configuration on RHEL5 and Clariion CX-300

    - by Kamil Z
    I have a problem discovering my FC-connected CX-300 storage. Frankly speaking, I'm a complete novice with Fibre Channel, so a step-by-step explanation would be appreciated. My configuration consists of two IBM HS20 blades with RHEL5.4 on board and 2x QLogic ISP2422-based 4Gb Fibre Channel HBAs on each blade. The FC switches are two Brocades built into the BladeCenter chassis, and finally there is the EMC Clariion CX-300. The CX-300 and the Brocade switches should be configured properly, because they were working fine with the previous configuration, whose main difference was RHEL3 instead of RHEL5.4. Below is my output from several useful commands: #lspci | grep Fibre 06:01.0 Fibre Channel: QLogic Corp. ISP2422-based 4Gb Fibre Channel to PCI-X HBA (rev 02) 06:01.1 Fibre Channel: QLogic Corp. ISP2422-based 4Gb Fibre Channel to PCI-X HBA (rev 02) #lsmod | grep qla qla2xxx 1084741 0 scsi_transport_fc 37577 1 qla2xxx scsi_mod 141717 10 scsi_dh,qla2xxx,sg,scsi_transport_fc,usb_storage,libata,mptspi,mptscsih,scsi_transport_spi,sd_mod #cat /proc/scsi/scsi Attached Devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: LSILOGIC Model: 1030 IM IM Rev: 1000 Type: Direct-Access ANSI SCSI revision: 02 Host: scsi0 Channel: 01 Id: 00 Lun: 00 Vendor: IBM-ESXS Model: ST936701LC FN Rev: B418 Type: Direct-Access ANSI SCSI revision: 04 Host: scsi0 Channel: 01 Id: 00 Lun: 00 Vendor: IBM-ESXS Model: ST936701LC FN Rev: B418 Type: Direct-Access ANSI SCSI revision: 04 I followed the instructions from this site (editing /etc/multipath.conf), but multipath -ll produced no output at all. Do you have any suggestions for discovering FC-connected LUNs in such a configuration?
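
    A minimal /etc/multipath.conf sketch for a CLARiiON-class array is below; the "DGC" vendor string and the rescan step are assumptions based on typical CX-series setups, so verify them against the array documentation before relying on them.

        # /etc/multipath.conf -- minimal sketch, adjust to the array's documentation
        defaults {
            user_friendly_names yes
        }
        blacklist {
            devnode "^sda"              # skip the local system disk
        }
        devices {
            device {
                vendor  "DGC"           # EMC CLARiiON arrays report vendor "DGC"
                product "*"
                path_grouping_policy group_by_prio
                failback immediate
            }
        }

    After editing, rescanning each HBA (echo "- - -" > /sys/class/scsi_host/host1/scan, repeated per host) and re-running multipath -v2 && multipath -ll should show the LUNs, assuming zoning and LUN masking on the switches and the CX-300 actually present them to the blades.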


  • JBossMQ - Clustered Queues/NameNotFoundException: QueueConnectionFactory error

    - by mfarver
    I am trying to get an application working on a JBoss cluster. It uses queues internally, and the developer claims that it should work correctly in a clustered environment. I have JBossMQ set up as an HA singleton on the cluster. The application works correctly on whichever node is currently running the queue, but fails on the other nodes with a "javax.naming.NameNotFoundException: QueueConnectionFactory not bound" error. I can look at JNDIView from the jmx-console and see that indeed the QueueConnectionFactory class only appears on the primary node in the Global context. Is there a way to see the cluster's JNDI listing instead of each server's? The steps I took from a default JBoss 4.2.3.GA installation were to use the "all" configuration, then remove /server/all/deploy/hsqldb-ds.xml and /deploy-hasingleton/jms/hsqldb-jdbc2-service.xml, copying the examples/jms/mysql-jdbc2-service.xml file into its place (editing that file to use DefaultDS instead of MySqlDS). Finally I created a mysql-ds.xml file in the deploy directory pointing "DefaultDS" at an empty database. I created a -service.xml file in the deploy directory with the queue definition, like the one below: <server> <mbean code="org.jboss.mq.server.jmx.Queue" name="jboss.mq.destination:service=Queue,name=myfirstqueue"> <depends optional-attribute-name="DestinationManager"> jboss.mq:service=DestinationManager </depends> </mbean> </server> All of the other cluster features are working: the servers list each other in the view, and sessions are replicating back and forth. The JBoss documentation is somewhat light in this area; is there another setting I might have missed? Or is this likely to be a code issue (is there different code to do a JNDI lookup in a clustered environment)? Thanks
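
    For reference, a hedged sketch of a jndi.properties that points lookups at HA-JNDI (default port 1100) rather than a single node's JNDI; the host names are placeholders and the port should be confirmed against cluster-service.xml.

        # jndi.properties -- sketch, assuming the default HA-JNDI port 1100
        java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
        java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
        # list every cluster node; HA-JNDI fails over between them
        java.naming.provider.url=node1.example.com:1100,node2.example.com:1100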


  • Linux Distro for Beginners

    - by XLR3204S
    Well... I know that's the question arising all over the Internet, but I couldn't find an answer to suit me after googling for quite some time. I'd like to get a Linux distribution and start learning to use the CLI. I'm looking for a distribution that already has GNOME installed, as I'll be using Linux-Command.org as my learning resource, and I'm not very familiar with CLI-based web browsers. I'd mainly like to get to know my way around a UNIX-based system, and then I think I'd like to pick up a CLI-only distribution and start doing more complex stuff. I've tried Ubuntu, Fedora Core, OpenSolaris and FreeBSD (the last two aren't Linux distros, I know). Ubuntu and FC are fine, they do come with Firefox, but I'm not really sure they're meant for learning purposes. OpenSolaris was OK as well, but I haven't got to play with it enough. FreeBSD 7.2 did not want to install itself on my 13" MacBook Pro; it generated a kernel panic every time while copying the files to the disk. So to sum this up, I'm trying to learn Linux, and I'm willing to invest time into this (that is, not giving up when the first problems arise). I also have intermediate knowledge of C++, if it helps, and I'm also using CLI vim to write small C++ CLI-based programs, so text editing shouldn't be a problem. And... speaking of Macs, how am I going to be limited if I try to learn how to use UNIX-based systems using the OS X Terminal? It uses bash 3.2; isn't this the same shell as the one found on most Linux machines? How does the fact that OS X is based on FreeBSD 4.4, if I'm not mistaken, affect this? Thanks in advance, and hopefully, I'll have a starting point ASAP.


  • What's the right way to create a Ubuntu user whose home directory is /var/www/SITE?

    - by Leonnears
    First off, I need to state that I'm completely ignorant when it comes to server administration on Ubuntu, and I'm doing what I can. I have been trying to do this for hours with no luck. Basically, I want to create an Ubuntu user whose home directory is /var/www/SITE, preferably chrooted to it. The chroot part is not so important right now, as I'd first prefer to get anything working at all. The user should be able to upload files here and the webserver (www-data user?) should be able to pick them up with no problem. I was able to create the user and give it the home directory /var/www/SITE (the user is "anders"). I gave him a password, and "anders" can connect to FTP just fine and upload files. But here's where things don't work: while my user can upload files to that /var/www/SITE directory, when I access the webpage in my browser I get a Forbidden error. Note that anders is also a member of the www-data group. I can fix this by running sudo chmod g+s /var/www/SITE/* anders -R but this is of course not ideal. Ideally the files should "work" as soon as I upload them. What's the right way to fix this? If it matters (I don't think so), I'm editing my files in Coda 2 and anders is the user for it.
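
    A hedged sketch of one common approach: make www-data the group owner, set the setgid bit on the directory so new uploads inherit that group, and open up group read permissions. The paths and user name mirror the question; the umask step is an assumption and depends on which FTP daemon is serving anders.

        sudo usermod -d /var/www/SITE -g www-data anders   # home dir + primary group
        sudo chown -R anders:www-data /var/www/SITE
        sudo chmod -R g+rX /var/www/SITE                   # group can read files and traverse dirs
        sudo chmod g+s /var/www/SITE                       # new files/dirs inherit the www-data group
        # if uploads still arrive unreadable, set local_umask=022 (vsftpd) or the
        # equivalent umask option in whatever FTP daemon is in use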


  • How to deal with DELL support system?

    - by Nishant Kumar
    We have purchased a Dell OptiPlex 9010 SSFV for our organization's work. Since the first installation, two of the USB keyboard keys have not been working properly: I had to press those keys twice; the first press did nothing and the second printed two characters (as if it were buffering the first character). The two keys that were not working properly: grave accent (below the Esc key: `) and double quote (left of the Enter key: "). We registered our complaint with Dell and they suggested (in a hard-to-understand and weird English accent) some tests and tricks, such as switching to different ports and checking the keyboard on a different PC, and it worked well with a different PC (with Windows 7 Home Premium installed). It was clear that it is an OS fault, hence they suggested re-installing the OS. The problem began here: we have a project on the run and currently a video editing project set up on our system, so we can't re-install the system in a hurry, and the Dell people were not providing any other solution, such as updating the keyboard driver. Arguments: I am a software engineer and don't think it is a feasible solution to re-install an entire system for simple problems. This problem has been present since the fresh system installation, so I don't think re-installing will solve it. Finally, I had to find the solution myself and got it here; now I want to show my disappointment to the Dell people, or at least tell them that they should improve their support system so as not to advise re-installing an entire system for such simple problems. Notes: we have purchased 5 years of NEXT-business-day support from Dell for around 8000 INR (not for that kind of solution from Dell). It is the Dell India support system. So can anyone tell me how to tackle the Dell support system officially, so that they will pay more attention in the near future? Thanks


  • Black screen appears when booting new install of Ubuntu 11.10 on my desktop, cannot access Grub menu to fix

    - by izn
    I installed 11.10 on my desktop PC but get a black screen after the BIOS screen when I try to boot it. I was able to run 10.04.4 on my hard drive before installing 11.10, and I am also able to use 11.10 from my USB pendrive and CD-ROM. I've tried unplugging all USB devices before booting and also upgrading from 11.10 to 11.10. Holding the Shift key from the BIOS screen doesn't allow me to access the GRUB menu to try: highlight the first entry, press "e" to edit it, navigate to the words "quiet splash", delete them and type "nomodeset" in their place (without quotes), press Ctrl + X to continue boot, and once on the desktop go to System > Administration > Additional Drivers and activate the recommended drivers. So, running 11.10 from my pendrive, I tried editing /etc/default/grub, commenting out the GRUB_HIDDEN_TIMEOUT setting by putting a '#' in front of it to display the GRUB menu, and setting GRUB_TIMEOUT to a value greater than or equal to 1, e.g. GRUB_TIMEOUT=10. However, when I run sudo update-grub, I get: /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?) I get the same error with update-grub after: sudo mount /dev/sda1 /mnt and after: sudo grub-install --root-directory=/mnt /dev/sda reboot sudo update-grub Other suggestions to fix the update-grub problem: open Synaptic, purge all the related grub packages, reinstall grub-pc and finally run sudo update-grub. Or use Grub Customizer http://ubuntuforums.org/showthread.php?t=1195275 What would be the best way to approach this? I'm concerned about purging "all the related grub installed packages", but if it's true some files are corrupted this would seem necessary. Also, was I executing the correct commands, i.e. with mount and grub-install, before running update-grub?
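
    For the grub-probe error, a hedged sketch of the usual chroot repair from the live pendrive is below; it assumes / is on /dev/sda1 with no separate /boot partition, so adjust the device names to the actual layout.

        sudo mount /dev/sda1 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        grub-install /dev/sda      # reinstall GRUB to the MBR
        update-grub                # /dev is now mounted, so grub-probe can find /
        exit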


  • localhost won't load after adding config data to httpd

    - by OldWest
    I am not very experienced with configuring httpd, and I am following a tutorial to view my site with a domain name under localhost. My localhost just blanks out and my Apache service won't restart. I checked all of my paths and they are correct. I am editing the windows/system32/drivers/etc/hosts file and my Apache httpd file. This is what I am putting in my hosts file: 127.0.0.1 www.cars_v1.0.com.localhost And in the footer of my httpd file I am putting this: <VirtualHost 127.0.0.1:80> ServerName www.cars_v1.0.com.localhost DocumentRoot "C:\wamp\www\symfony\cars_v1.0\web" DirectoryIndex index.php <Directory "C:\wamp\www\symfony\cars_v1.0\web"> AllowOverride All Allow from All </Directory> Alias /sf C:\wamp\www\symfony\cars_v1.0\lib\vendor\symfony-1.4.8\data\web\sf <Directory "C:\wamp\www\symfony\cars_v1.0\lib\vendor\symfony-1.4.8\data\web\sf"> AllowOverride All Allow from All </Directory> </VirtualHost>
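
    A hedged sketch for Apache 2.2 name-based virtual hosts, assuming the default WAMP layout: a NameVirtualHost line has to match the <VirtualHost> address, and keeping a first vhost for plain localhost stops the new entry from hijacking it. Running httpd -t from Apache's bin directory (or the WAMP config test) reports the exact line that prevents the service from restarting.

        # httpd.conf footer -- sketch only, adjust paths to the actual install
        NameVirtualHost 127.0.0.1:80

        <VirtualHost 127.0.0.1:80>
            ServerName localhost
            DocumentRoot "C:/wamp/www"
        </VirtualHost>

        <VirtualHost 127.0.0.1:80>
            ServerName www.cars_v1.0.com.localhost
            DocumentRoot "C:/wamp/www/symfony/cars_v1.0/web"
        </VirtualHost>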


  • How To Fix Samba File Permission Issues in Mac OSX

    - by user1867768
    I've had this problem for a long time; here are the basics of it. I use a mixed environment of Windows 7/8 computers with Mac OS X Lion/Mountain Lion. Whenever a Windows computer creates a file on an SMB share on the Mac, it no longer has group permissions; only the person who created or updated it can access it. My solution has been to go onto the Mac system and reset permissions for the entire directory structure, and then everyone can see it again. About the only thing I can find on this was for OS X pre-Snow Leopard, and it mentioned editing the smb.conf file to fix that particular problem (similar to mine, http://www.gladsheim.com/blog/2009/09/19/osx-leopard-and-samba-permissions/). The problem is that Lion and Mountain Lion no longer have an smb.conf file (another web search pointed to com.apple.smbd.plist, http://kidsreturn.org/?s=smb.conf, but it's an XML file now and I'm not clear on what should be done to THAT to fix the problem). So, short of writing an AppleScript to run every hour to fix permissions, does anyone know a solution to this very frustrating problem? Thank you in advance for any advice or solutions you can offer!
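
    One workaround that sidesteps smb.conf entirely is an inherited ACL on the share root, so every new file is group-accessible no matter which client created it. A hedged sketch, assuming the share lives at /Volumes/Data/Shared and the default "staff" group covers all the local accounts:

        # run once on the Mac hosting the share -- adjust the group and path
        sudo chmod -R +a "group:staff allow read,write,delete,add_file,add_subdirectory,file_inherit,directory_inherit" /Volumes/Data/Shared
        ls -le /Volumes/Data/Shared    # verify the inherited ACL entry is listed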


  • What's the difference between local and remote addresses in the 2008 firewall?

    - by Ian
    In the Windows Firewall with Advanced Security manager, under Inbound Rules > rule properties > Scope tab, you have two sections to specify local IP addresses and remote IP addresses. What makes an address qualify as local or remote, and what difference does it make? This is pretty obvious with a normal setup, but now that I'm setting up a remote virtualized server I'm not quite sure. What I've got is a physical host with two interfaces. The physical host uses interface 1 with a public IP. The virtualized machine is connected to interface 2 with a public IP. I have a virtual subnet between the two - 192.168.123.0. When editing a firewall rule, if I place 192.168.123.0/24 in the local IP address area or the remote IP address area, what does Windows do differently? Does it do anything differently? The reason I ask is that I'm having problems getting domain communication working between the two with the firewall active. I have plenty of experience with firewalls, so I know what I want to do, but the logic of what is going on here escapes me, and these rules are tedious to edit one by one. Ian
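
    In scope terms, the local address is the address the rule's own host answers on and the remote address is the peer on the other end of the connection. A hedged netsh sketch that scopes an allow rule to the 192.168.123.0/24 virtual subnet (the rule name is a placeholder):

        netsh advfirewall firewall add rule name="Virtual subnet traffic" ^
            dir=in action=allow profile=any ^
            localip=192.168.123.0/24 remoteip=192.168.123.0/24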


  • Prestashop is not saving Memcached settings

    - by ianenri
    I have an issue with the admin site of PrestaShop. I'm trying to activate the caching system but it doesn't save the setting. When I add a server (I'm using an Amazon ElastiCache server) it saves it, but when I select the enable option and click Save, it redirects me to "Back Office > Preferences > Performance" with a blank page and only the admin tabs visible. When I go back to those settings I see that the caching option is disabled. Also, there is a warning: "To use Memcached, you must install the Memcache PECL extension on your server. http://www.php.net/manual/en/memcache.installation.php", even though I already installed memcached via yum. I also tried modifying the settings.inc.php file, editing define('_PS_CACHE_ENABLED_', '0'); to: define('_PS_CACHE_ENABLED_', '1'); But I get a 500 Internal Server Error on every page, so I prefer to leave it as before. Any ideas? I'm using PrestaShop 1.4.6.2 with Nginx 1.0.11 and PHP-FPM 5.3.8 on a CentOS 5.7 system.
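
    Note that the memcached yum package is only the cache daemon; the warning is about the PHP client extension. A hedged sketch of installing it on CentOS 5 with PHP-FPM (package names and the php.d path may differ on this box):

        yum install php-pear php-devel zlib-devel gcc make
        pecl install memcache
        echo "extension=memcache.so" > /etc/php.d/memcache.ini
        service php-fpm restart
        php -m | grep -i memcache     # should now list the extension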


  • Setting up dnsmasq for a local network

    - by WishCow
    A small group of developers and I have just moved to a new office, and I'd like to set up dnsmasq on our development server, so that when we deploy web apps there we don't have to edit our own hosts files. We have a router at 192.168.3.1 which we don't have access to. I figured I'd install a DNS server on the development box and we would all record its IP as a secondary DNS server. Unfortunately I'm struggling to make this work. The name of the devel server is devbox, its IP is 192.168.3.99, and it's running the latest Ubuntu Server (Karmic). My computer is running Ubuntu Desktop (Karmic). What I'd like to achieve: let's say I have three websites, website1, website2, website3, running on the development box. I'd like to access them by the URLs: http://website1.devbox http://website2.devbox http://website3.devbox So I have configured Apache on the devel box, installed dnsmasq, and put the following lines into its hosts file: 192.168.3.99 website1.devbox 192.168.3.99 website2.devbox 192.168.3.99 website3.devbox and edited my own resolv.conf file to include the devel box as a nameserver: nameserver 192.168.3.99 It's working fine, I can access the sites. The problem is that it doesn't scale well. I'd like all the domains ending with .devbox forwarded to the development box, and this is what I'm struggling with. I have tried putting 192.168.3.99 devbox into the hosts file, and editing the line in dnsmasq.conf: # Add local-only domains here, queries in these domains are answered # from /etc/hosts or DHCP only. local=/devbox/ But I cannot get it working. If I try any URL that is not explicitly present in the development box's hosts file, the DNS lookup fails. Is the local directive for something else? Am I looking in the wrong place?
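
    A hedged sketch of the dnsmasq.conf directive that wildcards a whole suffix; address= returns the given IP for every name under the domain, whereas local= only marks the domain as one answered from /etc/hosts.

        # /etc/dnsmasq.conf on devbox
        address=/devbox/192.168.3.99
        # restart and test from a client
        sudo /etc/init.d/dnsmasq restart
        dig anything.devbox @192.168.3.99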



  • Exim 4 Virtual Domains and Catchall on Debian (Squeeze)

    - by parazuce
    Hello, I've been at it for about 4 hours now, searching as well as trying different tutorials. Here's my setup: I have 2 domains, both under my own DNS server (MX records set up as well). I have exim4 successfully running, and it is able to send messages from both of those domains. I have tested this using sendmail and manually setting the "From" attribute. Exim successfully delivers mail to users no matter which domain was specified. I'm fine with that, but I'm having an issue setting up virtual domains and adding custom delivery options (such as a catch-all). I've been searching for about 4 hours, and I can't find any up-to-date documentation on how to do this. The old method would be to add a line such as: domainlist local_domains = @:localhost:dsearch;/etc/exim4/virtual Once that line was added, I made a directory at /etc/exim4/virtual, then created files inside such as example.com which would then contain rules for delivery under that domain. This did not work, however. Searching further, I've found that exim no longer supports dsearch (I guess because they claim it never has?). This is where I'm stuck. I'm on a "split" configuration as well.
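
    For reference, a heavily hedged sketch of a redirect router for per-domain virtual alias files with a "*" catch-all entry; it assumes the Exim build actually includes the dsearch lookup (exim4 -bV lists the compiled-in lookup types), otherwise the domains line has to enumerate the domains explicitly.

        # /etc/exim4/conf.d/router/350_exim4-config_virtual -- sketch only
        virtual_aliases:
          driver = redirect
          domains = dsearch;/etc/exim4/virtual
          data = ${lookup{$local_part}lsearch*{/etc/exim4/virtual/$domain}}
          no_more

    Each file /etc/exim4/virtual/example.com would then hold lines like "info: someone@example.com" and a final "*: catchall@example.com" for the catch-all.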


  • Backup, Migrate or Clone Failing CentOS 4 (LVM)

    - by Hegelworm
    Hello there, I've been running a BlueQuartz CentOS 4 system (Nuonce.net distro) for a few years now, and although the hard drive (Deskstar) has always been a bit noisy, on a few recent occasions I've heard it having trouble spinning up. Basically, I want to clone this drive to a similar-sized one (80 gig). I've spent many hours reading up on dd, dd_rescue, rsync, clonezilla and LVM mirroring, yet the sheer number of options and nightmarish accounts has left me frozen - unable to make an informed decision as to how to start. I've made a few attempts. dd failed after about 2 hours, as, although the drives appeared to be identical on the surface (ATA Seagate Barracudas, Thai not Chinese), the destination drive is slightly smaller. My most recent attempt involved using a Debian CD to format the new drive and then rsync-ing everything over and editing the new drive's grub and fstab to reflect the changes. No joy here either, as I hadn't chosen LVM when partitioning the destination drive and it wouldn't boot. As you can probably tell, I'm out of my depth here, and a panic-invoking mixture of caution and frustration has prompted me to sign up here. The server itself, although not strictly a production environment, has a very specific installation of Festival, LAME and FFmpeg and provides the back-end for a text-to-speech jQuery plugin that I've built over the last 2 years. I'm also planning to rebuild the whole TTS system on Debian, as the existing CentOS system still has PHP4 etc. For now though, I'd really like to just shift everything over to a new drive. As this is my first post, please feel free to lay any house rules on me that I might've overlooked; I've been hovering around StackOverflow for a while now but have only just signed up. Many thanks.
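
    Because the target disk is slightly smaller, a raw whole-disk dd can never fit; a file-level copy onto a freshly partitioned LVM layout avoids that. A hedged sketch, assuming a live CD with the old system mounted at /mnt/old, the new disk already partitioned, formatted and mounted at /mnt/new, and the new disk being /dev/sdb:

        rsync -aAXHv /mnt/old/ /mnt/new/                # preserve perms, ACLs, xattrs, hardlinks
        mount --bind /dev  /mnt/new/dev
        mount --bind /proc /mnt/new/proc
        chroot /mnt/new grub-install /dev/sdb           # CentOS 4 uses GRUB legacy
        # then adjust /etc/fstab and /boot/grub/grub.conf on the clone if the
        # device or LVM volume names changed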


  • Error applying iptables rules using iptables-restore

    - by John Franic
    Hi, I'm using Ubuntu 9.04 on a VPS. I'm getting an error when I apply an iptables rule set. Here is what I have done: 1. Saved the existing rules: iptables-save > /etc/iptables.up.rules 2. Created iptables.test.rules and added some rules to it: nano /etc/iptables.test.rules These are the rules I added: *filter # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0 -A INPUT -i lo -j ACCEPT -A INPUT -i ! lo -d 127.0.0.0/8 -j REJECT # Accepts all established inbound connections -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT # Allows all outbound traffic # You can modify this to only allow certain traffic -A OUTPUT -j ACCEPT # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites) -A INPUT -p tcp --dport 80 -j ACCEPT -A INPUT -p tcp --dport 443 -j ACCEPT # Allows SSH connections # # THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE # -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT # Allow ping -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT # log iptables denied calls -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7 # Reject all other inbound - default deny unless explicitly allowed policy -A INPUT -j REJECT -A FORWARD -j REJECT COMMIT After editing, when I try to apply the rules with iptables-restore < /etc/iptables.test.rules I get the following error: iptables-restore: line 42 failed Line 42 is COMMIT, and if I comment that out I get iptables-restore: COMMIT expected at line 43 I'm not sure what the problem is; it expects COMMIT, but if COMMIT is there it gives an error. Could it be due to the fact that I'm using a VPS? My provider uses OpenVZ for virtualization.
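
    When iptables-restore only reports the failure at the COMMIT line, the quickest way to find the offending rule on a container kernel is to feed the suspect matches and targets to iptables one at a time; on OpenVZ the state and limit matches and the LOG target are the usual missing pieces. A hedged sketch:

        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "test: "
        iptables -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
        iptables -F INPUT      # remove the test rules again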


  • Weird .#filename files on remote ssh-connected systems after mcedit

    - by etranger
    I'm using MacFusion sshfs in combination with Midnight Commander, and when I edit remote text files with mcedit, weird symlinks are created on the remote system. $ ls -l .* lrwxr-xr-x 1 user group 34 Jun 27 01:54 .#filename.txt -> [email protected] where etranger is my local login name, and mbp is the hostname of my notebook running Mac OS. The symlinks can be removed by running a remote rm command, but cannot be deleted on the MacFUSE-mounted volume and thus pollute the filesystem. I cannot figure out which part of the software is responsible for this or how I could fix it; any help is appreciated. EDIT: This appears to be mcedit behavior, as documented here: https://dev.openwrt.org/ticket/8245 Apparently, sshfs fails to remove the symlink to the lock file for some reason (".#" in the filename, perhaps), and it pollutes the filesystem. A quick workaround is possible, using another bug of Midnight Commander: editing (F4) the broken symlink effectively converts it to the missing lock file it was supposed to point to, and removes the symlink itself. The newly created file may then be deleted normally. EDIT 2: Unchecking "Follow symlink" in MacFusion apparently allows sshfs to remove dead symlinks, so the problem disappears completely.


  • Windows Server 2008 Remote Desktop printing blank pages

    - by Colin Pickard
    I have a Windows Server 2008 (not R2) machine which has problems with redirected printing. Clients connecting via Remote Desktop have their printers redirected and appearing for them to print to, but printing from applications on the server to those local printers gives blank pages, missing pages, or pages with headers/footers but no middle section. The issues are consistent for similar prints, but sometimes other prints and/or applications will work correctly. I have installed PDFCreator locally on the server, and the same print jobs sent by the same application appear correctly in the PDFs. Printing that PDF via the redirected printer prints correctly. I have tried the following: Installing drivers. I've installed several different drivers, for both the client and server operating system and architecture, on the client and the server. Reinstalling the printers. I've tried reinstalling on remote print servers, the clients, and the host server, and tried different client machines. Granting everyone full permissions on the print spool folder on the server. Editing the registry to forward non-USB ports (http://support.microsoft.com/kb/302361). None of these have made any difference. The clients are using Windows 7 or Windows XP and none of them have any issues with printing locally. Any ideas? Thanks!


  • Windows 8 auto-hibernate from sleep not working on Retina MacBook Pro

    - by frenchglen
    I have a similar question to this one, only my context is the 15" Retina MacBook Pro - and Windows 8. I have just the original Mac OS X Mountain Lion on there, then Windows 8 via Boot Camp, with no rEFIt installed. (I just press Alt every time I restart into Windows - actually as a security measure: tech-unsavvy thieves who steal the laptop think it's only a Mac and don't discover my Windows install as quickly as they otherwise would, and by that time I can remotely activate various anti-theft Mac apps and nab them that way.) SO: like the related question asks, why isn't it behaving like it should? The Windows 7 FAQ states: "Will sleep eventually drain my laptop battery? If your laptop battery charge gets critically low while the computer is asleep, Windows automatically puts the laptop into hibernation mode." But this is just not happening on my rMBP Windows 8. It seems EVERY time I set the laptop to sleep (when it reaches 10%), then arrive home and plug it in, hoping to simply resume my work, it does NOT save the session to disk and I lose ALL my work. Whose fault is it? Win 8's (a bug, grr)? Or Apple's EFI (maybe fixable by editing EFI options - do I have to install rEFIt to make it work, perhaps)? Or maybe changing Windows power options can somehow fix the problem? Thanks for your help.
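
    A hedged powercfg sketch for checking that the critical-battery action is actually set to hibernate while on battery; the SUB_BATTERY and BATACTIONCRIT aliases are the usual ones, but verify them with powercfg /aliases on this install.

        powercfg /aliases
        powercfg /q SCHEME_CURRENT SUB_BATTERY BATACTIONCRIT
        REM 0 = do nothing, 1 = sleep, 2 = hibernate, 3 = shut down
        powercfg /setdcvalueindex SCHEME_CURRENT SUB_BATTERY BATACTIONCRIT 2
        powercfg /setactive SCHEME_CURRENT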


  • How does SVN work with Apache?

    - by ajsie
    I've got Ubuntu installed with LAMP. I'm using WebDAV to upload/download files to/from the Ubuntu web server after I have edited the PHP source files in NetBeans. However, I wonder what the best practice is for editing source files and committing these changes to the new website, because if we are 2-3 developers, I guess we have to use SVN. But I have never used it before, so I wonder how it works. Should I install it and then select /var/www (Apache's webroot) as the repository folder? Then when I check in, will all the changes apply immediately? Could someone please explain the following steps: how to download and edit the source files, upload the files, and see the new changes on the website. I have only worked with a local Apache before, and it was only me; now there will be some more programmers, so I have to set up a decent, central environment for this, and have to know how NetBeans, SVN, WebDAV and Apache all work together. Thanks!
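
    A hedged sketch of the usual split, assuming the repository lives outside the webroot and the webroot is just another working copy that gets updated after commits (all paths are placeholders):

        # one-time: create the central repository somewhere Apache does NOT serve
        sudo svnadmin create /var/svn/mysite
        # each developer works in a private checkout, not directly in /var/www
        svn checkout svn+ssh://server/var/svn/mysite ~/work/mysite
        cd ~/work/mysite && svn commit -m "change"
        # the webroot is its own checkout; deploying a release = updating it
        sudo svn checkout svn+ssh://server/var/svn/mysite /var/www   # once
        sudo svn update /var/www                                     # per release

    A post-commit hook can run the final svn update automatically, but the repository itself should never be the webroot folder.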


  • Project and Business Document Organization

    - by dassouki
    How do you organize, maintain edits, revisions and the relationship between: Proposals, Contracts, Change Orders, Deliverables, Projects? How do you organize your projects for re-usability? For example, is there a way to add tags to projects, to make them more accessible? What's a good data structure to dump all my files on an internet server for easy access? Presently, my work folder is set up as follows:
    /work/
        /projects
            /project_a
                /final (includes all final documents)
                    /contracts
                    /rfp_rfq
                    /change_orders
                    /communications (logs all emails, faxes, and meeting notes and minutes)
                    /financial
                        /paid
                        /unpaid
                    /reports
                /old (includes all documents that didn't make it into project_a/final/)
            /project_b
                ... same as above ...
        /references
            /technical_references
            /gov_regulations
            /data_sources
            /books
            /topic_based (each area of my expertise has a folder with references in them)
        /business_contacts
            contacts.xls (file contains all my contacts)
        /banking
            banking.xls (contains a list of all paid and unpaid invoices as well as some cool stats)
            /quicken (to do my taxes and yada yada)
                /year
        /education (courses I've taken)
            /webinars
            /seminars
            /online_courses
        /publications (includes the publications I've made)
            /publication_id
    We're mostly 5 people working together part-time on this thing. Since this is a very structured approach, I find it really difficult to remember what I've done on previous projects and go back and forth easily. What are your suggestions on improving my processes? I'm open to closed and open source software (as long as the price isn't too high). I also want to implement a system where I can save most of the projects online to increase collaboration and efficiency and reduce bandwidth, especially on document editing. Imagine emailing a document back and forth 5-10 times a day.


  • How to prevent nginx from locking files on mounted samba partition in Centos 6

    - by Bruce Kirkpatrick
    I'm using nginx 1.3.8 inside a CentOS 6.3 VirtualBox 4.2.4 virtual machine. The system is running the latest software available via yum update. The host OS is Windows 7. The site files nginx is serving are on a mounted samba partition, which is a folder on the host Windows system, i.e. inside Linux, nginx paths refer to /home/vhosts and this is mounted from D:\vhosts\ on Windows. The samba partition is mounted as root with 777 privileges. /etc/fstab looks like this, but with the real IP, username and password: //hostip/vhosts /home/vhosts cifs username=username,password=SECRETPASSWORD,uid=root,gid=root,file_mode=0777,dir_mode=0777,rw,_netdev 0 0 I.e. Linux/nginx reads from the Windows share, and not the opposite. In /etc/samba/smb.conf I have tried to disable all samba locking features, but it seems to have no effect even after rebooting the virtual machine: locking=no share modes=no oplocks = no level2 oplocks = no kernel oplocks =no I'm receiving "Access is denied" errors in Windows or Linux when attempting to overwrite a JavaScript file in Windows that has been accessed at least once by nginx. If I run "service nginx reload", the lock is removed and I'm able to save the file. That's why I think it is nginx causing the lock. The same problem occurs with directories; however, that may be a different issue not related to the use of samba. I'm using samba so that I can manage the source code outside of the virtual machine. Also note that after I run "service nginx reload", the file I'm editing is actually automatically deleted from the Windows host. SOLVED: I just reviewed my nginx.conf file. It appears the "open_file_cache" feature is what is causing the lock and the deleted files. When I set this option to open_file_cache off; my problem is resolved. I will repost this as the answer when it allows me to do so.
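
    For completeness, a sketch of the change described above - disabling nginx's descriptor cache so it stops holding the samba-backed files open - followed by a config test and reload:

        # /etc/nginx/nginx.conf, inside the http { } block
        http {
            open_file_cache off;
            # ... rest of the existing configuration ...
        }

        # verify and apply
        nginx -t && service nginx reload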


  • How to tackle dell support system? [closed]

    - by Nishant Kumar
    We have purchased a Dell OptiPlex 9010 SSFV for our organization's work. Since the first installation, two of the USB keyboard keys have not been working properly: I had to press those keys twice; the first press did nothing and the second printed two characters (as if it were buffering the first character). The two keys that were not working properly: grave accent (below the Esc key: `) and double quote (left of the Enter key: "). We registered our complaint with Dell and they suggested (in a hard-to-understand and weird English accent) some tests and tricks, such as switching to different ports and checking the keyboard on a different PC, and it worked well with a different PC (with Windows 7 Home Premium installed). It was clear that it is an OS fault, hence they suggested re-installing the OS. The problem began here: we have a project on the run and currently a video editing project set up on our system, so we can't re-install the system in a hurry, and the Dell people were not providing any other solution, such as updating the keyboard driver. Arguments: I am a software engineer and don't think it is a feasible solution to re-install an entire system for simple problems. This problem has been present since the fresh system installation, so I don't think re-installing will solve it. Finally, I had to find the solution myself and got it here; now I want to show my disappointment to the Dell people, or at least tell them that they should improve their support system so as not to advise re-installing an entire system for such simple problems. Notes: we have purchased 5 years of NEXT-business-day support from Dell for around 8000 INR (not for that kind of solution from Dell). So can anyone tell me how to tackle the Dell support system officially, so that they will pay more attention in the near future? Thanks


  • Windows 7 - How to access my documents from Windows 8 (dual boot)

    - by msbg
    I am dual booting Windows 7 and Windows 8 on two different partitions of the same drive: Win7: (C:) Win8: (D:) I am trying to get access to my Win7 user folder (C:\Users\Mason) in order to access my Win7 documents folder (C:\Users\Mason\Documents) from Windows 8. When I try this on Windows 8, I get an error message saying "You don't have permission to access this folder. Click here to permanently get access to this folder". When I click, the progress bar in Windows Explorer slowly moves to the maximum and disappears. When I try opening the folder, I get the same error message. When editing security permissions for the folder in Windows 8, Explorer freezes. I do not know how to remove the restrictions from Windows 7. I checked the Windows 8 user folder (D:\Users\Mason) and it had the group or user name "S-1-5-21-936898901-3363470404-1273668825-1001". I tried copying and pasting it into the Win7 user folder permissions, but got the error "An object with the following name cannot be found". How would I access my folders?
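
    A hedged sketch of doing the same thing from an elevated command prompt in Windows 8 instead of the freezing Explorer dialog; the user name and path are taken from the question, and note that this rewrites the ACLs on the old Win7 profile.

        REM take ownership of the old Windows 7 profile folder, recursively
        takeown /F C:\Users\Mason /R /D Y
        REM grant the Windows 8 account full control over the whole tree
        icacls C:\Users\Mason /grant Mason:(OI)(CI)F /T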


  • Permissions on mac for itunes library with multiple users - idea

    - by John
    I currently have a lot of music on an external drive, with my iTunes library set up from there. However, periodically, when the external drive isn't connected, iTunes will default back to the library location under my home directory. I don't want to mess with an external drive, as my Mac's HD is large enough to house the music collection. However, I have 4 family members - all with their own logins - using this same gob of music. I don't want 4 copies of the library, only one that all libraries reference. So, what I want to do is: 1 - move all music files to a shared directory at /Macintosh HD/users/music. I created this directory and adjusted permissions so all four users can read and write to it. 2 - get all four accounts to reference this library instead of the external or local home locations. I am hoping I can just check the box to keep the library organized in my account, which is the admin, and let iTunes move it all, then delete the current libraries for each account and re-add from the new shared location. Will the iTunes organization process cause permission issues, either by restricting the files to my account only or by removing write permission, or is there any other 'gotcha'? I am having a hard time coming up with a smooth solution that won't break everything and cause me to have mega duplicates or access issues. I would prefer not to do any XML library file editing if possible. Am I dreaming? Thanks for help.
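
    A hedged sketch of preparing the shared folder so every account keeps write access to files the others create; the path mirrors the question, and "youradmin" and the default "staff" group are assumptions, so swap in your own admin account or a dedicated group. Holding Option while launching iTunes then lets each account choose that one library.

        sudo chown -R youradmin:staff "/Users/music"
        sudo chmod -R +a "group:staff allow read,write,delete,add_file,add_subdirectory,file_inherit,directory_inherit" "/Users/music"
        ls -le /Users/music    # confirm the inherited ACL entry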


  • Linking to network shares from Sharepoint pages

    - by Russell C
    So the place I work decided to set up a Microsoft SharePoint 2010 server for task management, and I (as the lowly entry-level intern) have been tasked with "figuring it out." One thing that the end users really, really, really want is the ability to link to network shares (that are readable by anyone who will be using SharePoint) from a SharePoint web page. In order to do this, I have edited the HTML manually with several lines that look like the following: <a href="file://server/share">Server Share</a> This works (sometimes), but the link reported by SharePoint is often wrong, and editing pages that contain these links will mangle the code such that when I open it, the code no longer looks like what it did when I last hit save (breaking all those links). Obviously this is not sustainable. I've been told by coworkers that "it worked that way at the last place I worked", but I haven't found out how yet. Any ideas on how this should work, or am I barking up the wrong tree? None of the knowledge searches I've done shed any light on the situation. Thanks for any help! -Russell P.S. It should be noted that the file option in an href tag ONLY works in IE (which is a real bummer since we mostly use Firefox).
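
    For reference, a hedged sketch of the UNC-style href forms IE generally accepts; the five-slash file: form is the one most often quoted for UNC targets, and the server/share names are placeholders, so test which form SharePoint's rich-text editor leaves intact.

        <!-- both forms target \\server\share -->
        <a href="file://///server/share/folder">Server Share (file: URL)</a>
        <a href="\\server\share\folder">Server Share (raw UNC, IE only)</a>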

