Search Results

Search found 19191 results on 768 pages for 'core location'.

Page 579 of 768

  • Start daemon after specific samba share is mounted

    - by getack
    I asked this question on AskUbuntu, but it isn't getting any traction there, so I'll try here as well: I have a homebrew headless NAS running 12.04. In it I have a bunch of disks that are presented as a samba share thanks to Greyhole. If I want to do anything to the files within this share, I must do it through Greyhole so that everything is updated properly. Thus, the share must be mounted locally and then accessed from there if I want to work on the files from the local machine. I do this mounting automatically thanks to these instructions. I also have Deluge installed, which takes care of all my torrenting needs. Deluge's default download location is in this share, so that all the downloads are immediately available to the rest of the network. Obviously, for everything to work the share must be mounted; otherwise Deluge is going to have a problem downloading to it. The problem is that Deluge seems to start before the shares are mounted when the system boots, so downloading/seeding does not continue automatically after boot. I have to log in and force a manual rescan and start on each torrent, otherwise all the torrents just hang with an error. Is there a way I can make Deluge start only after the shares have been properly mounted? I looked into Upstart's emits functionality but I cannot seem to get it to work properly. Any advice?
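
    One direction to sketch (hedged: the job file name, the mount point, and the "deluge" user are assumptions, and mountall only emits the mounted event for fstab entries, so a share mounted by a custom script would need to emit its own event): an Upstart job that holds deluged back until the Greyhole share is actually mounted.

        # /etc/init/deluged-after-share.conf  (hypothetical Upstart job)
        description "Start deluged only after the Greyhole share is mounted"
        start on mounted MOUNTPOINT=/mnt/greyhole
        stop on runlevel [016]
        setuid deluge
        exec /usr/bin/deluged -d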

    Read the article

  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use Capistrano for website deployments and our Apache document root is a symlink to a particular code release. The deployment procedure switches the symlink from the old release to the new release as the final step of the deployment. We are migrating our webservers from real servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers are suffering from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (and I think I've even seen it take a couple of minutes). It's kind of like Apache has cached the physical path of the symlink for some time. Does anyone know some Apache settings I could look at to get it to "scan" for changes to its served files more quickly? Thoughts: I read that the disks on virtual machines are much slower (since they are network-attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done? The website runs PHP code. Perhaps there are some PHP config differences between RHEL and Ubuntu? I checked realpath_cache_ttl but both servers have it commented out:

        ; Duration of time, in seconds for which to cache realpath information for a given
        ; file or directory. For systems with rarely changing files, consider increasing this
        ; value.
        ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
        ;realpath_cache_ttl = 120
    We do use the APC opcode cache but don't think it's the issue, due to experimentation. The PHP code is in different file paths for each deployment and we ensure stat=1. Here is a similar question that is very interesting: 294107 - but it doesn't provide an answer for me. One solution would be to reload Apache every time we modify the document root symlink. I'll do this if we can't find another solution.
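
    A hedged observation (the php.ini path assumes Ubuntu's mod_php layout): even with ;realpath_cache_ttl commented out, PHP still applies its built-in default of 120 seconds, which is in the same ballpark as the delay described, so shrinking it is a cheap experiment; a graceful reload after the symlink switch is the blunt fallback.

        # A sketch, not a confirmed fix: lower PHP's realpath cache TTL, then reload.
        sudo sed -i 's/^;realpath_cache_ttl = 120/realpath_cache_ttl = 2/' /etc/php5/apache2/php.ini
        sudo apache2ctl graceful
        # Fallback: run "apache2ctl graceful" as the final Capistrano deploy step instead.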

    Read the article

  • ProCurve network expansion

    - by Blue Warrior NFB
    I've hit a bit of a wall with our network scale-out. As it stands right now, we have five ProCurve 2910al switches connected as above, but with 10GbE connections (two CX4, two fiber). This fully populates the central switch; there will be no more 10GbE connections from that device. This group of switches is not stacked (no stack directive). Sometime in the next two or three months I'll need to add a sixth, and I'm not sure how deep a hole I'm in. Ideally I'd replace the core switch with something more capable that has more 10GbE ports. However, that's a major outage and requires special scheduling. The two edge switches connected via fiber have dual-port 10GbE cards in them, so I could physically put another switch on the far end of one of those. I don't know how much of a good or bad idea that would be, though. Is that too many segments between end-points? Some config excerpts:

        Running configuration:

        ; J9147A Configuration Editor; Created on release #W.14.49
        hostname "REDACTED-SW01"
        time timezone 120
        module 1 type J9147A
        module 2 type J9008A
        module 3 type J9149A
        no stack
        trunk B1 Trk3 Trunk
        trunk B2 Trk4 Trunk
        trunk A1 Trk11 Trunk
        trunk A2 Trk12 Trunk
        vlan 15
           name "VM-MGMT"
           untagged Trk2,Trk5,Trk7
           ip helper-address 10.1.10.4
           ip address 10.1.11.1 255.255.255.0
           tagged 37-40,Trk3-Trk4,Trk11-Trk12
           jumbo
           ip proxy-arp
           exit

    Read the article

  • Debian/Redmine: Upgrade multiple instances at once

    - by Davey
    I have multiple Redmine instances. Let's call them InstanceA and InstanceB. InstanceA and InstanceB share the same Redmine installation on Debian. Suppose I wanted to install Redmine 1.3 on both instances, how would I do that? After upgrading the core files I would have to migrate the databases. What I would like to know is: can I migrate all databases in a single action? Normally I would do something like:

        rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID=InstanceA
    for each instance, but this gets tedious if you have 50+ instances. Thanks in advance!
    Edit: The README.Debian file that's in the (Debian) Redmine package states:

        SUPPORTS SETUP AND UPGRADES OF MULTIPLE DATABASE INSTANCES
        This redmine package is designed to automatically configure database
        BUT NOT the web server. The default database instance is called "default".
        A debconf facility is provided for configuring several redmine instances.
        Use dpkg-reconfigure to define the instances identifiers.
    But I can't figure out what to do with the "debconf facility".
    Edit2: My environment is a default Debian 6.0 "Squeeze" installation with a default Redmine (aptitude install redmine) installation on a default libapache2-mod-passenger. I have set up two instances with dpkg-reconfigure redmine.
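
    A minimal sketch of the "single action" part (it assumes the Debian package keeps one configuration directory per instance under /etc/redmine/, which is where the instance identifiers defined via dpkg-reconfigure end up, and that the application lives in /usr/share/redmine):

        # Loop the migration over every configured instance instead of typing the
        # rake line by hand; instance names are read from /etc/redmine/.
        cd /usr/share/redmine
        for conf in /etc/redmine/*/; do
            instance=$(basename "$conf")
            rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID="$instance"
        done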

    Read the article

  • Weird Apache Crash (with Dump) zend_hash_find (), libphp5.so

    - by Jacob84
    To be honest, I don't have experience working with Apache. I'm just putting my best intentions into solving this and don't know if I'm doing it right, so any help will be greatly appreciated. We have a PHP page which is throwing the following message in the browser: Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data. The logs in /var/log/httpd don't seem to help, because it seems that Apache is unable to write any information (maybe the exception or error occurs at some stage of the process that makes it impossible to log?). I've read about the procedure to make core dumps of Apache, and here is the content:

        Reading symbols from /lib64/libgpg-error.so.0...(no debugging symbols found)...done.
        Loaded symbols for /lib64/libgpg-error.so.0
        Reading symbols from /usr/lib64/php/modules/zip.so...(no debugging symbols found)...done.
        Loaded symbols for /usr/lib64/php/modules/zip.so
        Core was generated by `/usr/sbin/httpd'.
        Program terminated with signal 11, Segmentation fault.
        #0  0x00007fb828fff712 in zend_hash_find () from /etc/httpd/modules/libphp5.so
        Missing separate debuginfos, use: debuginfo-install httpd-2.2.15-15.el6.centos.1.x86_64
    I've been looking in the PHP files and I haven't found any direct call to zend_hash_find (which seems to be causing the error). I've been looking on Google but found nothing related. Can somebody please help? Is there any step that I need to take to learn more? Thanks a lot, as always!
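
    A hedged next step (the core file path is a placeholder; the httpd debuginfo package name is the one gdb itself suggests above): with PHP debug symbols installed, a full backtrace usually shows which extension or script sits on top of zend_hash_find, which narrows things down far better than the single frame alone.

        # Install matching debug symbols, then re-open the core dump for a full trace.
        # The plain "php" argument assumes the distro PHP package; adjust if PHP was
        # built from source.
        sudo debuginfo-install httpd-2.2.15-15.el6.centos.1.x86_64 php
        gdb /usr/sbin/httpd /tmp/core.12345   # path to the core file is an assumption
        # at the (gdb) prompt:  bt full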

    Read the article

  • LDAP not showing secondary groups

    - by Sandy Dolphinaura
    Currently, I have an LDAP server (running ClearOS, if that makes any difference) containing a database of users. So I went and set up LDAP on a couple of my Debian VMs using libpam-ldapd, and I discovered this odd problem: my group/user mapping shows up when running getent group, but the secondary groups do not show up when running id. Here is my /etc/nslcd.conf:

        # /etc/nslcd.conf
        # nslcd configuration file. See nslcd.conf(5)
        # for details.

        # The user and group nslcd should run as.
        uid nslcd
        gid nslcd

        # The location at which the LDAP server(s) should be reachable.
        uri ldaps://10.3.0.1

        # The search base that will be used for all queries.
        base dc=pnet,dc=sandyd,dc=me

        # The LDAP protocol version to use.
        #ldap_version 3

        # The DN to bind with for normal lookups.
        binddn cn=manager,ou=internal,dc=pnet,dc=sandyd,dc=me
        bindpw Me29Dakyoz8Wn2zI

        # The DN used for password modifications by root.
        #rootpwmoddn cn=admin,dc=example,dc=com

        # SSL options
        ssl on
        tls_reqcert never

        # The search scope.
        #scope sub

        #filter group (&(objectClass=group)(gidNumber=*))
        map group uniqueMember member
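
    A hedged debugging sketch (the account name is a placeholder): secondary groups come from a separate "initgroups" lookup, so watching nslcd's debug output while running id shows exactly which filter and attributes it queries and whether the directory returns any membership at all, which tells you whether the group filter/map lines above need changing.

        # Stop the daemon and run it in the foreground with debug logging,
        # then trigger the failing lookup from another shell with: id someuser
        sudo service nslcd stop
        sudo nslcd -d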

    Read the article

  • OS X can't copy files from Windows Home Server...over wifi.

    - by John Clayton
    I'm a brand spanking new user of OS X, coming from a lifetime of Windows use. I've been setting up my new MacBook Pro and have run into a very unusual problem: over wifi, I am unable to copy files to or from my Windows Home Server. The problem seems to exist only over wifi, and only to WHS. Here are the details of my setup:
      - 2010 MacBook Pro (Core i7), OS X 10.6.3
      - Windows Home Server PP3 (virtualized in XenServer 5.5)
      - Windows 7 Ultimate x64 desktop
      - Windows 7 Ultimate x64 in Boot Camp
      - D-Link DIR-655 wireless N router
    Here is what I've done to narrow down the problem:
      - Files copy fine from WHS to OS X when using gigabit ethernet
      - Files copy fine from desktop to OS X when using gigabit ethernet
      - Files fail to copy from WHS to OS X when using wifi (error -51)
      - Files copy fine from desktop to OS X when using wifi
      - Files copy fine from WHS to Boot Camp when using wifi
      - Files copy fine from desktop to Boot Camp when using wifi
    From what I can tell, it seems to be some sort of issue between OS X and WHS, but I can't for the life of me see what would be different between shares on WHS and my desktop. They are both connected using smb://ADDRESS (I've tried both by IP and by name). I can browse the shares on the WHS, but copying to OS X fails. I originally found the issue while installing VS2010 off an ISO from WHS, mounted to a Windows 7 VM using VMware Fusion. During the installation the VM was unusable - even the clock fell behind the host by about 8 minutes. Once I plugged in the ethernet and disabled the wifi, things picked up and finished quickly. The Fusion 3.1 RC is the only thing I can think of that I installed that may have messed with the wifi driver. I've also tried resetting the wifi router, and have changed it from G & N to N-only. Under Boot Camp I get similar speeds to my wife's N laptop. Any ideas? Thanks!
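
    One small experiment to sketch (server address, share name, and file are placeholders): mounting the WHS share from Terminal and repeating the copy there often produces a more specific error than Finder's generic -51, and it takes Finder itself out of the equation.

        # Mount the share via mount_smbfs and repeat the failing copy from the shell.
        mkdir -p ~/whs-test
        mount_smbfs //username@10.0.0.5/Software ~/whs-test
        cp ~/whs-test/somefile.iso ~/Desktop/
        umount ~/whs-test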

    Read the article

  • Networking Home Office

    - by Matt
    I'm in the process of building an office in my garden. It's about 25m away from my house. I'd like to run a wired network connection to the office. I'd rather not go down the powerline route, as speeds don't seem great, and I'm likely to want to be moving a lot of data around on the internal network. I have an electrician who is running armoured electrical cable to the office and is providing conduit for me to run network cable. My questions are:
      1) What type of cable to run
      2) How I terminate/connect it at both ends
    I could get something like armoured cat6 UTP solid core (like this: http://www.netstoredirect.com/cat6-cable/289166-external-armoured-cat6-utp-solid-cable-price-per-metre.html), which seems fairly robust, but then I have to terminate it. Additionally, where the cable enters my house, there is about another 15m to where my router is situated. I also read this article: http://www.audioholics.com/audio-video-cables/bjc-cat-network-cable-quality-interview which scared me into realising I don't know what I'm doing, particularly with termination! Or I could get a "cat6 external patch cable" (e.g. http://www.netstoredirect.com/rj45-network-cables/239231-external-cat6-utp-ldpe-rj45-patch-leads.html), run that in the conduit, and work out how to terminate it at the house end. At the office end I guess I can just plug it into a switch. Any help? Thanks

    Read the article

  • Ubuntu 11.10 VirtualBox Unity 3D not working

    - by naveen
    After struggling for four hours, I still cannot get Unity 3D of GNOME 3 to work on my VirtualBox - I have been poring over Internet and forum posts but to no avail. Here's what I've done so far:
      - VirtualBox 4.1.4r74921 on Windows 7
      - Installed Ubuntu Desktop 11.10 (32 bit)
      - Enabled 3D acceleration
      - Allocated 1.5GB of RAM
      - Allocated 50MB video memory (hope this is not the culprit)
      - Installed Guest Additions 4.1.4
      - Did apt-get update and apt-get upgrade
      - Booted back into Ubuntu - it falls back to Unity 2D
      - Shared folders and mouse integration all work, so the Guest Additions are properly installed
    I tried the command below, and here is the output:

        $ /usr/lib/nux/unity_support_test -p
        OpenGL vendor string: Mesa Project
        OpenGL renderer string: Software Rasterizer
        OpenGL version string: 2.1 Mesa 7.11
        Not software rendered: no
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: no
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: no
    I am trying to find out what the "no" means but cannot find any good answers. The host has an Intel Core i5 processor, 4GB of RAM, and an NVIDIA GeForce 8400GS display adapter. Is anyone else facing the same problem? If so, can you point me to a solution or any reference where I can find a solution?
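
    A hedged check (not a guaranteed fix): "OpenGL renderer string: Software Rasterizer" in the output above means the guest is not using VirtualBox's 3D driver at all, so it is worth confirming that the Guest Additions kernel modules and X.Org driver actually loaded inside the guest before chasing anything else.

        # Inside the Ubuntu guest: expect vboxvideo and vboxguest to be loaded,
        # and the vboxvideo driver to appear in the X.Org log.
        lsmod | grep -i vbox
        grep -i vboxvideo /var/log/Xorg.0.log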

    Read the article

  • How do I switch java versions to an earlier version in Fedora 17?

    - by JHutson456
    I just installed Fedora 17. I'm setting up the Android build environment and need Java. I downloaded and installed jdk-6u32-linux-amd64.rpm, ran java -version, and it spat out the correct version. Well, a day or two later I tried my first compile in Fedora 17 and it complained about Java and failed. I ran java -version again and, lo and behold, it spits out:

        $ java -version
        java version "1.7.0_03-icedtea"
        OpenJDK Runtime Environment (fedora-2.1.fc17.7-x86_64)
        OpenJDK 64-Bit Server VM (build 22.0-b10, mixed mode)
    I'm stumped. I mean, I've run the update/upgrade commands since I installed, but I didn't think that updated full version revisions... So I ran alternatives --config java and that only gave me the Java 1.7 version. While digging around more, I discovered that the recommended version of Java for the build environment is jdk-6u27-linux-x64-rpm.bin, so I downloaded that from here: Oracle Download. When I ran:

        sudo sh jdk-6u27-linux-x64-rpm.bin
    it returned:

        Unpacking...
        Checksumming...
        Extracting...
        UnZipSFX 5.50 of 17 February 2002, by Info-ZIP ([email protected]).
          inflating: jdk-6u27-linux-amd64.rpm
          inflating: sun-javadb-common-10.6.2-1.1.i386.rpm
          inflating: sun-javadb-core-10.6.2-1.1.i386.rpm
          inflating: sun-javadb-client-10.6.2-1.1.i386.rpm
          inflating: sun-javadb-demo-10.6.2-1.1.i386.rpm
          inflating: sun-javadb-docs-10.6.2-1.1.i386.rpm
          inflating: sun-javadb-javadoc-10.6.2-1.1.i386.rpm
        Preparing...   ########################################### [100%]
          package jdk-2000:1.6.0_32-fcs.x86_64 (which is newer than jdk-2000:1.6.0_27-fcs.x86_64) is already installed
        Done.
    So now I'm confused. I ran alternatives --config java again, but it still only returns 1.7, so I don't know what to do. I want to end up with 6u27 as the installed and functional version of the JDK. Thank you.
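
    A hedged sketch (the JDK path is an assumption based on where the Sun/Oracle RPMs normally install, and it registers the 1.6.0_32 that the output above says is already present): the Oracle RPM does not add itself to the alternatives system on Fedora, so it will never appear in alternatives --config java until it is registered by hand.

        # Register the Oracle JDK with alternatives, then select it and verify.
        sudo alternatives --install /usr/bin/java java /usr/java/jdk1.6.0_32/bin/java 20000
        sudo alternatives --config java
        java -version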

    Read the article

  • I keep losing wireless connection

    - by posfan12
    I have a WRT54GL v1.1 wireless router and a WUSB54G v4 wireless adapter, both made by Linksys. The router is in the living room by the TV and my computer is in the bedroom. My ISP is Brighthouse.
      - Operating System: Microsoft Windows 7 Home Premium 64-bit SP1
      - CPU: Intel Core 2 Duo E6600 @ 2.40GHz, 36 °C (Conroe, 65nm technology)
      - RAM: 3.00GB Single-Channel DDR2 @ 333MHz (5-4-4-14)
      - Motherboard: eMachines EMCP73VT-PM (CPU 1), 26 °C
      - Graphics: ASUS VS247 (1920x1080@60Hz), 767MB GeForce GTX 460 (nVidia), 43 °C
      - Hard Drives: 466GB Seagate ST350041 8AS SCSI Disk Device (SATA), 35 °C
      - Optical Drives: HL-DT-ST DVDRAM GH41N SCSI CdRom Device
      - Audio: High Definition Audio Device
    The problem is that my Internet connection will work fine for 15 minutes or so, then the data will just stop flowing. Windows says I am still connected, and the systray icon still shows five bars, but Comodo Firewall stops showing up and down traffic, and another of my systray applications complains about a lack of connection. What I usually do is either disconnect from the network manually, or unplug and re-plug the USB adapter, at which point the connection will work properly for another 15 minutes. I've tried unplugging my router for 30 seconds and letting it reboot. I've also tried looking for a newer driver for my adapter, but I seem to have the latest version, 3.1.3.0. This is a recent problem, starting about a week ago; for the previous several months things were working just fine. I haven't made any changes to my system that I am aware of. The only thing I did was open my case to blow the dust out of it, then put everything back together. How do I fix this issue?

    Read the article

  • Sharing disk volumes across OpenVZ guests to reduce Package Management Overhead

    - by andyortlieb
    Is it feasible to create a single "master" OpenVZ guest that would only be used for package management, and use something like mount --bind on several other OpenVZ guests to sort of trick them into using the environment installed by the master guest? The point of this would be so that users can maintain their own containers and yet stay in sync with the master development environment, so they'll always have the latest & greatest requirements without worrying too much about system administration. If they need to install their own packages, they could put them in /opt or /usr/local (or set a path to their home directory). To rephrase, I would like several (developers', for example) OpenVZ guests whose /bin, /usr (and so on...) actually refer to the same disk location as that of a master OpenVZ guest, which can be started up to install and update common packages for the environment shared by all of this group of OpenVZ guests. For what it's worth, we're running Debian 6.
    Edit: I have tried mounting (bind, and read-only) /bin, /lib, /sbin, /usr in this fashion, and it refuses to start the containers, stating that files are already mounted or otherwise in use:

        Starting container ...
        vzquota : (error) Quota on syscall for id 1102: Device or resource busy
        vzquota : (error) Possible reasons:
        vzquota : (error)   - Container's root is already mounted
        vzquota : (error)   - there are opened files inside Container's private area
        vzquota : (error)   - your current working directory is inside Container's
        vzquota : (error)     private area
        vzquota : (error) Currently used file(s):
        /var/lib/vz/private/1102/sbin
        /var/lib/vz/private/1102/usr
        /var/lib/vz/private/1102/lib
        /var/lib/vz/private/1102/bin
        vzquota on failed [3]
    If I unmount these four volumes and start the guest, and then mount them after the guest has started, the guest never sees them mounted.
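
    A hedged sketch of binding at a different point in the container's life cycle (the master guest's private-area path is an assumption; the container ID 1102 comes from the error output above): OpenVZ runs a per-container /etc/vz/conf/<CTID>.mount script after it mounts the container's root but before it starts it, which is the usual way to add bind mounts without touching the private area that vzquota is complaining about.

        # /etc/vz/conf/1102.mount  (hypothetical; make it executable with chmod +x)
        #!/bin/bash
        source /etc/vz/vz.conf
        source "${VE_CONFFILE}"
        MASTER=/var/lib/vz/private/101    # the master guest's private area (assumption)
        for d in bin lib sbin usr; do
            mount -n --bind "${MASTER}/${d}" "${VE_ROOT}/${d}"
        done
        # (making these read-only would take an extra "mount -o remount,ro,bind" per path)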

    Read the article

  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr K.
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time, and "work" means not only database reads but writes as well. The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. We currently have two HP DL380s with two 4-core processors each, which can handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster using them, or is it better to go with more high-end hardware? I am particularly curious about:
      - how many servers are needed and how powerful they should be (number of processors/cores, size of RAM)
      - what network equipment should be used (what kind of switches, network cards)
      - any other hardware, like particular disk storage solutions, etc., that is needed
    Another thing is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stackoverflow).
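
    A back-of-the-envelope calculation that falls straight out of the numbers above (treating every concurrent user as one busy Apache thread at roughly 40 MB, which is pessimistic since many users sit between requests, and assuming hypothetical 32 GB web servers): 5000 threads x 40 MB/thread is about 200 GB of RAM for Apache alone, which already implies at least seven such web servers before counting MySQL, the OS, or any headroom.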

    Read the article

  • Problem configuring php-fpm with nginx

    - by Nisanio
    First of all: I'm not an expert in configuring things. This is very new for me, so my apologies in advance. At work we have a CentOS server. The guy who worked here before installed nginx. We need to make a PHP site, so, obviously, I need to set up PHP and make it work with nginx. To make a very long tale short, I had to replace the nginx binary with a new one (because the older one was compiled without FastCGI), and I had to recompile and install PHP (because the new version has FPM). Then I struggled with the config files, ending up with this nginx.conf (not the whole file):

        user php;
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
            include fastcgi_params;
        }
    and I uncommented some parameters in the php-fpm config (too much to detail here, but the important part is that the group and user are "php"). I never could start php-fpm with the instructions from the book:

        sudo /usr/sbin/php-fpm start
    But after looking around the net, I found this:

        sudo /usr/local/sbin/php-fpm --fpm-config=/usr/local/etc/php-fpm.conf
    This worked (I think). I restarted nginx. But... nothing happens with PHP... My calls to PHP files (via Firefox) don't even appear in the log (/opt/nginx/logs/error.log). I'm really, really exhausted and lost... Could anyone help me, pleaaase.... :( Thanks in advance
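
    A hedged sketch of what a working handoff usually looks like (the root path is an assumption; the key points are that the location block must live inside the server { } that actually answers the request, and SCRIPT_FILENAME must point at the real file on disk rather than the literal /scripts prefix):

        server {
            listen 80;
            root  /opt/nginx/html;              # wherever the PHP site actually lives
            index index.php;

            location ~ \.php$ {
                fastcgi_pass   127.0.0.1:9000;
                fastcgi_index  index.php;
                fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include        fastcgi_params;
            }
        }
    If requests still never reach PHP, checking the access log as well as the error log, and confirming php-fpm is really listening on 127.0.0.1:9000 (for example with netstat -tlnp | grep 9000), narrows down which side is at fault.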

    Read the article

  • When importing an Access table into Excel, a look-up column is showing all values as numbers

    - by user3651997
    I have a basic Access-to-Excel question that has me frustrated. I have two Access 2010 data tables. One is a list of managers; the primary key is a manager ID (an AutoNumber, because managers can have the same name), and each row also has manager name, manager email, etc. The second data table is a list of departments; the primary key for each row is a unique department code, and the foreign key is a manager ID (AutoNumber). I used the Lookup Wizard to create this connection. However, Access does not show the manager ID in the foreign key location; it shows Manager Name, as I requested when I used the Lookup Wizard. Now I am trying to import the second table (departments) into Excel 2010. I clicked import from Access, chose the Department table, and everything popped into Excel. But the Manager Name column is showing Manager ID instead, so I have a list of numbers instead of names. How can I make Excel show what I see in Access? Thanks!

    Read the article

  • Clustering/load balancing for cluster unaware applications

    - by AaronLS
    Forgive me if I use any of these terms incorrectly. I am wondering if there is any kind of software that would allow me to "join" two computers together such that a cluster-unaware application could utilize their combined computing resources? By "cluster unaware" I mean an application that isn't designed to share work across multiple servers. My understanding is that clustering is enabled by the specific application's architecture, such that messaging between multiple instances of the application coordinates the sharing of work. Instead, I am looking for something that enables clustering at the OS or virtualization level, so that any application could essentially be clustered. Failing that, I am also wondering about the following scenario: we have 3 different applications we will call A, B, and C, and 2 single-core computers. At any given time, let's say that any combination of those applications will be CPU intensive. In cases where only 2 of those apps are very active, one of them should be moved over to a different server. In a nutshell, some sort of dynamic, automatic shuffling of the applications' load. I have heard of virtual machines that can be migrated across physical machines while live, but I am wondering if this can be done automatically in response to an application's or VM's CPU activity?
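
    A very rough sketch of the second idea under stated assumptions (KVM/libvirt guests on shared storage, one VM per application, invented hostnames and threshold): live migration can be wrapped in a trivial load check and run from cron, which is the crude, do-it-yourself version of the automatic shuffling described above.

        # If this host's 1-minute load average is 2 or more, live-migrate the
        # "appB" guest to host2. All names here are placeholders.
        load=$(awk '{print int($1)}' /proc/loadavg)
        if [ "$load" -ge 2 ]; then
            virsh migrate --live appB qemu+ssh://host2/system
        fi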

    Read the article

  • Is it the address bus size or the data bus size that determines "8-bit , 16-bit ,32-bit ,64-bit " systems?

    - by learner
    My simple understanding is as follows. Memory (RAM) is composed of bits, in groups of 8 which form bytes, each of which can be addressed, hence byte-addressable memory. The address bus carries the location of a byte of memory. If an address bus is of size 32 bits, it can hold up to 2^32 numbers and hence can refer to up to 2^32 bytes of memory = 4GB of memory, and any memory beyond that is useless. The data bus is used to send the value to be written to / read from memory. If I have a data bus of size 32 bits, a maximum of 4 bytes can be written to / read from memory at a time. I find no relation between this size and the maximum possible memory size. But I read here that:
        Even though most systems are byte-addressable, it makes sense for the processor to move as much data around as possible. This is done by the data bus, and the size of the data bus is where the names 8-bit system, 16-bit system, 32-bit system, 64-bit system, etc. come from. When the data bus is 8 bits wide, it can transfer 8 bits in a single memory operation. When the data bus is 32 bits wide (as is most common at the time of writing), at most, 32 bits can be moved in a single memory operation.
    This says that the size of the data bus is what gives a system the name 8-bit, 16-bit and so on. What is wrong with my understanding?

    Read the article

  • How do I resolve BSOD: PAGE_FAULT_IN_NONPAGED_AREA?

    - by Burnzy
    I have been troubleshooting this for a few days and cannot fix it anyhow. Computer specifications:
      - Mobo: ASUS Sabertooth X58 LGA 1366 Intel X58 SATA 6Gb/s USB 3.0 ATX Intel Motherboard
      - CPU: Intel(R) Core(TM) i7 CPU 920 (Bloomfield) @ 2.67 (no OC)
      - RAM: 6144MB
      - GPU: 2x NVIDIA GeForce GTS 250 1Go in SLI (SLI is not enabled at the moment anyway)
      - Drives: OCZ RevoDrive OCZSSDPX-1RVD0120 PCI-E x4 120GB PCI Express MLC Internal SSD [RAID-0] (I know this could potentially cause trouble, but I had the BSOD before using this drive); Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM 32MB Cache SATA 3.0Gb/s 3.5" Internal Hard Drive - Bare Drive
    Click here for a log of a crash I just had. Click here for a log of a crash I had 30 minutes later; note that it's another driver.
    Some info:
      - Occurrence: it seems pretty random so far; I haven't noticed any kind of pattern.
    I tried:
      - Windows memory diagnostic (went smoothly at 1066MHz)
      - As I said, it was still happening on my HDD, so when I bought the RevoDrive I installed a new OS on there and still got the error; I believe it happened even when I had no drivers installed at that point (not 100% sure)
      - Changed the following registry value to 1 (true): HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutdown
      - Tried to lower the RAM clock even more
      - Made sure the RAM timings were set as recommended by the manufacturer
      - Verified that the motherboard was in good physical condition (yes, and it's brand new)
    There is one thing to note: when I got the new motherboard, I installed the new drivers WITHOUT formatting, and then I removed the motherboard drivers that I could remove from the control panel (pretty much the first things that had been installed). Could this cause an issue even ON THE OTHER drive (the RevoDrive)? Hopefully someone can help me; I am getting tired of this, spending so much money and not being able to get this to work correctly. If you need any other information, let me know. Thank you!

    Read the article

  • nginx inserting extra characters in Multi-status reply body

    - by user125011
    Here's the setup. I've got one server running Apache/PHP hosting ownCloud. Among other things, I'm using it to do CardDAV contact syncing. In order to make things work with my domain, I have an nginx server running on the frontend as a reverse proxy to the ownCloud server. My nginx config is as follows:

        server {
            listen 80;
            server_name cloud.mydomain.com;

            location / {
                proxy_set_header X-Forwarded-Host cloud.mydomain.com;
                proxy_set_header X-Forwarded-Proto http;
                proxy_set_header X-Forwarded-For $remote_addr;
                client_max_body_size 0;
                proxy_redirect off;
                proxy_pass http://server;
            }
        }
    The problem is that when my phone does a PROPFIND on the server, nginx adds extra characters to the content body that throw the phone off. Specifically, it prepends d611\r\n at the front of the body and appends 0\r\n\r\n to the end of the content. (I got this from Wireshark.) It also re-chunks the result. How do I get nginx to send the original content as-is?
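
    A hedged reading and experiment (not a confirmed fix): d611\r\n and the trailing 0\r\n\r\n look exactly like HTTP/1.1 chunked-transfer framing (d611 being a hex chunk size), so the phone appears to be treating the chunk markers as part of the body. One thing to try is telling nginx not to re-chunk the proxied replies, inside the existing location block:

        location / {
            # ...existing proxy_set_header lines...
            chunked_transfer_encoding off;   # stop nginx from re-chunking responses
            proxy_pass http://server;
        }
    followed by nginx -t && nginx -s reload to test and apply the change.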

    Read the article

  • Symbolic directory link shared in domain

    - by Sabre
    We have a file server that is 2008 R2 Standard; it is a member server in a 2008 AD domain. I need to relocate some of the files and directories and would like to do it behind the scenes, more or less without impacting the users. (The reason for this is that some of the files, due to recent software changes, HAVE to be located locally on one of the workstations, but they can be accessed by other applications remotely.) So symbolic links seem the panacea here. I moved a directory to another network share in the same domain (Windows 7 Professional), created a symlink to it in the location it used to be in, named it the same thing, and to the local user it seems almost transparent. I.e., when logged into the desktop of the file server, I can go to the directory, open the link, and it leaps to the other share as if it were local, exactly as expected. Then I tried it from another client computer (Windows 7 Professional as well), went through the normal provisioning of R2R and L2R with fsutil... No joy. What I am getting is an access denied error, "Logon failure: Unknown username or bad password", using the same account that I log on to the file server with locally (which happens to be the domain admin). So I cannot believe it is telling the truth, or... I assume it is not passing the credentials I am using to connect to the first share all the way through the symlink. The end result is I want users on the domain to browse to share A; inside share A is a mixture of directories/files that reside there, and symlinks to directories/files on the second machine over the network in the same domain. Possible? Or am I misunderstanding how the symlink should work?

    Read the article

  • Symbolic link not allowed or link target not accessible

    - by TK Kocheran
    I can't seem to get a symlink working in my Apache VirtualHost no matter what I try, and I see the following error in the error log:

        Symbolic link not allowed or link target not accessible: /var/www/carddesigner
    I can browse the actual symlink from Linux with no problems whatsoever:

        $ ls -l /var/www | grep "carddesigner"
        lrwxrwxrwx 1 rfkrocktk rfkrocktk 64 2011-02-28 16:52 carddesigner -> /home/rfkrocktk/Documents/Projects/Work/carddesigner/build/main/
    Additionally, I've made sure that my VirtualHost allows the FollowSymLinks option. /etc/apache2/sites-enabled/000-localhost:

        <VirtualHost 127.0.0.1:80>
            ServerAdmin ##########
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Deny from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel debug
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
            RewriteEngine On
            RewriteLog "/var/log/apache2/mod_rewrite.log"
            RewriteLogLevel 9
        </VirtualHost>
    I can't seem to find any other configuration files that override this and/or prevent symlinks from being followed. Any ideas? Here are my permissions on the actual referenced files:

        $ ls -l ~/Documents/Projects/Work/carddesigner/build/main
        total 12
        drwxrwxrwx 5 rfkrocktk rfkrocktk 4096 2011-02-28 16:11 advanced
        drwxrwxrwx 2 rfkrocktk rfkrocktk 4096 2011-02-28 16:10 core
        drwxrwxrwx 2 rfkrocktk rfkrocktk 4096 2011-02-28 16:10 simple
    Seems like the permissions are good to go, right?
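
    A hedged check (no new config, just inspection): "link target not accessible" is usually about the directories above the target rather than the target itself, because the Apache user has to be able to traverse every component of /home/rfkrocktk/Documents/... even though the final directories are world-writable. namei shows the permission on each path component at a glance.

        # Show the permissions of every directory the web server has to cross.
        namei -l /home/rfkrocktk/Documents/Projects/Work/carddesigner/build/main/
        # If /home/rfkrocktk (or any parent) lacks o+x, one option - with the usual
        # caveats about exposing a home directory - is:
        chmod o+x /home/rfkrocktk /home/rfkrocktk/Documents /home/rfkrocktk/Documents/Projects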

    Read the article

  • .htaccess error "not allowed here" for all instructions

    - by andres descalzo
    I am using Debian Lenny and Apache 2. I changed the default site configuration to allow .htaccess overrides with:

        AllowOverride AuthConfig
    But I always get the error message "not allowed here" when putting any instructions in the .htaccess file.
    EDIT: file default:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/
            <Directory />
                Options FollowSymLinks
                Order allow,deny
                Allow from all
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks Includes
                #AllowOverride All
                #AllowOverride Indexes AuthConfig Limit FileInfo
                AllowOverride AuthConfig
                Order allow,deny
                Allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>
    .htaccess:

        #Options +FollowSymlinks
        # Prevent Directoy listing
        Options -Indexes
        # Prevent Direct Access to files
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>
        # SEO URL Settings
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]
    PHP info:

        apache2handler
        Apache Version = Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny10 with Suhosin-Patch
        Apache API Version = 20051115
        Server Administrator = webmaster@localhost
        Hostname:Port = hw-linux.homework:80
        User/Group = www-data(33)/33
        Max Requests = Per Child: 0 - Keep Alive: on - Max Per Connection: 100
        Timeouts = Connection: 300 - Keep-Alive: 15
        Virtual Server = Yes
        Server Root = /etc/apache2
        Loaded Modules = core mod_log_config mod_logio prefork http_core mod_so mod_alias mod_auth_basic mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_cgi mod_deflate mod_dir mod_env mod_mime mod_negotiation mod_php5 mod_rewrite mod_setenvif mod_status
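
    A hedged explanation by example: "not allowed here" means the directive belongs to an override class that AllowOverride does not currently grant. The .htaccess above uses Options -Indexes (needs the Options class), a FilesMatch block with Order/Deny (needs Limit), and mod_rewrite directives (need FileInfo), so with only AuthConfig granted every one of them is rejected. One combination that would cover this particular file, placed in the <Directory /var/www/> block:

        AllowOverride AuthConfig FileInfo Limit Options
    followed by a config test and reload, e.g. apache2ctl configtest && /etc/init.d/apache2 reload.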

    Read the article

  • Rails/Mongo across multiple different geo-regions

    - by wmarbut
    I have a system that by necessity requires physical presence in three or more different locations, and I need advice on structuring it in such a way that my database stays replicated in a timely manner without horrible latency. I've seen MySQL access and replication be incredibly slow when the application server was trying to talk to a node that wasn't physically colocated. In this case I am using MongoDB.
      - The stack is Linux/Passenger/Ruby/Rails/MongoDB.
      - The database is write-heavy and read-light.
      - The infrastructure is Amazon EC2.
      - The application layer must be physically located in 3 or more different locations. I can't justify this requirement further than that it is a requirement.
      - The database, however, needn't be located in more than one location if it can be written to quickly from the other locations.
    From reading Mongo's documentation, replication seems like more of a candidate than sharding because my datastore is not huge. However, I don't see anything that addresses the issue of speed for servers communicating across large distances with potentially high latency.
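
    A hedged sketch of one replica-set shape that fits the "write in one place, read everywhere" constraint (hostnames and the set name are invented; each mongod would be started with --replSet app): keep the primary in the region closest to most writers and make the remote members priority 0, so they replicate asynchronously and can never be elected primary across the slow links.

        # Initiate the set from the intended primary's region.
        mongo --host primary.us-east.example.com --eval '
        rs.initiate({
          _id: "app",
          members: [
            { _id: 0, host: "primary.us-east.example.com:27017", priority: 2 },
            { _id: 1, host: "app.eu-west.example.com:27017",     priority: 0 },
            { _id: 2, host: "app.ap-south.example.com:27017",    priority: 0 }
          ]
        })'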

    Read the article

  • How to connect through a proxy using Remote Desktop?

    - by scottmarlowe
    So I've got a home server running Windows Server 2003. I use a dual network card setup and Routing and Remote Access to link the internal, private network to the external connection. The external connection hooks directly to my cable modem (so no routers or other devices sitting in between). The problem I'm having is that I can't connect remotely from a location outside the house (that is, connecting to the server's external connection) using either Remote Desktop or VNC. I have enabled both ports in Routing and Remote Access's firewall to allow access, and I have enabled Remote Desktop in Windows Server 2003. The odd thing is that I can access my home server's SVN repository and I can even ping the server's IP. I am using the IP to attempt to connect, though I use a dyndns.com-provided name to connect to my SVN repository, so it shouldn't make a difference (I know the IP is getting resolved correctly). Any ideas on where to start diagnosing this one? I haven't seen anything in my server's event log. If any other info is needed, let me know. Thanks.
    UPDATE: One last piece of information: we use a proxy server at work, which I'm nearly 100% sure is the culprit. I have a workaround: if I connect to our VPN (even though I'm already inside the building) I am able to connect to my home server. This is with VNC. However, is there a way to connect through a proxy using Remote Desktop?
    ONE MORE UPDATE: Indeed, it was the HTTP proxy I'm sitting behind at work that was causing the issue. An acceptable workaround is to use my VPN connection to bypass the proxy, and I'm in!

    Read the article

  • Segmentation fault on login to mysql

    - by numberwhun
    Hello everyone! I recently did a fresh install of Ubuntu on my laptop (HP dv7, AMD dual core with 4 gigs of RAM). I am working on installing my development environment and tools, and one of the first things I was working on is getting MySQL installed. The following was my configure statement with options:

        ./configure --prefix=/usr/local/mysql --with-big-tables --with-unix-socket-path=/usr/local/mysql/tmp/mysql.sock --with-named-curses-libs=/lib/libncurses.so.5.7
    After I did the make; make install, I did the post-configuration, such as setting the root password and installing the mysqld daemon in its rightful place. My issue is when I try to log in to MySQL to start using it; the following shows what happens:

        $ mysql -u root -p
        Enter password:
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 1
        Server version: 5.1.42 Source distribution

        Segmentation fault
    I have searched Google extensively, I have searched through the MySQL bugs database, and I have yet to find anything that matches my issue. Here are the contents of my my.cnf file, in case you want to see them:

        $ cat /etc/my.cnf
        [mysqld]
        basedir=/usr/local/mysql
        datadir=/usr/local/mysql
        socket=/usr/local/mysql/tmp/mysql.sock

        [mysql.server]
        user=mysql
        #basedir=/var/lib

        [client]
        socket=/usr/local/mysql/tmp/mysql.sock

        [mysqld_safe]
        err-log=/usr/local/mysql/logs/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
    I am really hoping that someone here can tell me what has gone wrong with my installation, as I would really love to know. I welcome and look forward to all responses. Thank you in advance! Best regards, Jeff
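
    A hedged first step (paths come from the configure line above): since the server banner prints before the crash, the connection was accepted and it is probably the interactive mysql client itself that dies - quite possibly in the line-editing/curses code that --with-named-curses-libs pinned to one specific library file. Two quick experiments help separate the suspects:

        # 1. A non-interactive client on the same connection path: if this works,
        #    the server and socket are fine and the crash is in the mysql client.
        /usr/local/mysql/bin/mysqladmin -u root -p status

        # 2. Re-run the crashing client under gdb and grab a backtrace.
        gdb --args /usr/local/mysql/bin/mysql -u root -p
        # at the (gdb) prompt: run, reproduce the crash, then: bt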

    Read the article
