Search Results

Search found 18728 results on 750 pages for 'setup deployment'.

Page 238 of 750

  • ZFS: Mirror vs. RAID-Z

    - by John Clayton
    I'm planning on building a file server using OpenSolaris and ZFS that will provide two primary services - an iSCSI target for XenServer virtual machines and a general home file server. The hardware I'm looking at includes 2x 4-port SATA controllers, 2x small boot drives (one on each controller), and 4x big drives for storage. This leaves one free port per controller for upgrading the array down the road. Where I'm a little confused is how to set up the storage drives. For performance, mirroring appears to be king, and I'm having a hard time seeing what the benefit of RAID-Z over mirroring would be. With this setup I can see two options - two mirrored pairs striped together, or RAID-Z2. Both should protect against two drive failures and/or one controller failure; the only benefit of RAID-Z2 would be that any two drives could fail. Usable storage should be 50% of raw capacity in either case, but the first should have much better performance, right? The other thing I'm trying to wrap my mind around is the benefit of mirrored arrays with more than two devices. For data integrity, what, if any, would be the benefit of RAID-Z over a three-way mirror? Since ZFS maintains file integrity, what does RAID-Z bring to the table - don't ZFS's integrity checks negate the value of RAID-Z's parity?
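
    For reference, the two layouts being compared would be created roughly as follows - a hedged sketch only, with hypothetical OpenSolaris device names standing in for the four storage drives (one mirror half on each controller):

      zpool create tank mirror c7t0d0 c8t0d0 mirror c7t1d0 c8t1d0   # two 2-way mirrors striped together
      zpool create tank raidz2 c7t0d0 c7t1d0 c8t0d0 c8t1d0          # one RAID-Z2 vdev across all four drives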

    Read the article

  • Windows Server 2008 network speed slow, Xen 3.4.3 HVM ISO

    - by Elliot.Bradshaw
    I've set up a VM running Windows Server 2008 on a host node running Xen 3.4.3-5 and the following kernel: 2.6.18-308.1.1.el5xen #1 SMP Wed Mar 7 05:38:01 EST 2012 i686 i686 i386 GNU/Linux. The network speed on the VM is very slow - using online speed tests I can only get it up to 8-9 Mbps. The line is 100 Mbps burstable and the host node has no problem achieving those speeds. If I set up a VM running CentOS, it too has no problem achieving those speeds. I've done some pretty exhaustive troubleshooting, but nothing has helped: new VM installations of Win2k8 have the same network problem; upgrading to the most recent kernel-xen (2.6.18-308.1.1.el5xen) did not help; upgrading from Xen 3.4.0 to Xen 3.4.3-5 did not help; disabling the Windows firewall etc. did not help; changing the network card device config from auto-negotiation to a manual 100 Mbps full duplex did not help; changing the network receive buffer packet size did not help (tried all combos from 64k to 8k). At this point I'm pretty much out of ideas - any help would be appreciated!
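
    One diagnostic angle not in the list above - offered only as a hedged suggestion - is to look at the guest's backend network interface in dom0, since checksum-offload mismatches are a commonly checked factor for slow Windows guests. The domain ID (5) and interface name are assumptions; a purely emulated NIC may appear as a tap device rather than vifN.0:

      xm list                     # find the Windows guest's domain ID
      ethtool -k vif5.0           # show offload settings on the guest's backend interface
      ethtool -K vif5.0 tx off    # commonly tried workaround: disable TX checksum offload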

    Read the article

  • Debian/Redmine: Upgrade multiple instances at once

    - by Davey
    I have multiple Redmine instances. Let's call them InstanceA and InstanceB. InstanceA and InstanceB share the same Redmine installation on Debian. Suppose I wanted to install Redmine 1.3 on both instances - how would I do that? After upgrading the core files I would have to migrate the databases. What I would like to know is: can I migrate all databases in a single action? Normally I would run something like rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID=InstanceA for each instance, but this would get tedious if you have 50+ instances. Thanks in advance! Edit: The README.Debian file in the (Debian) Redmine package states: SUPPORTS SETUP AND UPGRADES OF MULTIPLE DATABASE INSTANCES. This redmine package is designed to automatically configure database BUT NOT the web server. The default database instance is called "default". A debconf facility is provided for configuring several redmine instances. Use dpkg-reconfigure to define the instances identifiers. But I can't figure out what to do with the "debconf facility". Edit2: My environment is a default Debian 6.0 "Squeeze" installation with a default Redmine (aptitude install redmine) installation on a default libapache2-mod-passenger. I have set up two instances with dpkg-reconfigure redmine.
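
    A minimal sketch of looping the migration over every instance - assuming the Debian package keeps one configuration directory per instance under /etc/redmine/ and the application itself under /usr/share/redmine (both paths should be verified on the actual system):

      #!/bin/sh
      # run the database migration for every configured Redmine instance
      cd /usr/share/redmine || exit 1
      for dir in /etc/redmine/*/; do
          instance=$(basename "$dir")
          rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID="$instance"
      done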

    Read the article

  • Hints on diagnosing performance issue in OpenBSD firewall

    - by Tom
    My OpenBSD 4.6 pf firewall has started having really bad performance in the past few weeks. I've isolated the firewall (as opposed to the WAN connection, switch, cable, etc.) as the problem, but need a hint on how to further diagnose or fix it. The facts: the normal setup is DSL Modem - FW ext. NIC - FW int. NIC - Switch - Laptop, and in that setup I get only 25 Kbps! Plugging the laptop straight into the DSL modem gives a 1 Mbps connection (full speed, as advertised), so the DSL connection seems to be OK. Plugging the laptop directly into the firewall's internal NIC (bypassing the switch) also gives only 25 Kbps, so the switch does not seem to be the problem. I've replaced the Ethernet cables, but it didn't help. Here's the weird thing: reloading the ruleset (/sbin/pfctl -Fa -f /etc/pf.conf) causes the laptop's connection to go up to 1 Mbps (i.e. full speed) for a few minutes before it gradually degrades back down to 25 Kbps again. Any ideas on what's wrong or how I could further diagnose the problem?
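
    Since flushing and reloading the ruleset (which also clears the state table) temporarily restores full speed, state-table or memory pressure is one hypothesis worth ruling out. These read-only pf diagnostics could be captured while the slowdown is happening and again right after a reload, then compared:

      pfctl -si             # state-table size, limits, searches and mismatches
      pfctl -s states | wc -l
      pfctl -s memory       # pf memory pool limits
      pfctl -vs rules       # per-rule evaluation and state counters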

    Read the article

  • Squid - Active Directory - permissions based on Nodes rather than Groups

    - by Genboy
    Hi, I have Squid running on a gateway machine and I am trying to integrate it with Active Directory, both for authentication and for giving different browsing permissions to different users. The two helpers I'm using are:

      1) /usr/lib/squid/ldap_auth -b OU=my,DC=company,DC=com -h ldapserver -f sAMAccountName=%s -D "CN=myadmin,OU=Unrestricted Users,OU=my,DC=company,DC=com" -w mypwd
      2) /usr/lib/squid/squid_ldap_group -b "OU=my,DC=company,DC=com" -f "(&(sAMAccountName=%u)(memberOf=cn=%g,cn=users,dc=company,dc=com))" -h ldapserver -D "CN=myadmin,OU=Unrestricted Users,OU=my,DC=company,DC=com" -w zxcv

    Using the first command, I am able to authenticate users. Using the second, I can figure out whether a user belongs to a particular Active Directory group, so I should be able to set ACLs based on groups. However, my customer's AD setup is such that users are arranged under different nodes. For example, he has users set up in the following way:

      cn=usr1,ou=Lev1,ou=Users,ou=my,ou=company,ou=com
      cn=usr2,ou=Lev2,ou=Users,ou=my,ou=company,ou=com
      cn=usr3,ou=Lev3,ou=Users,ou=my,ou=company,ou=com

    So he wants different permissions depending on whether a user sits under the Lev1, Lev2 or Lev3 node. Note that these aren't groups, but nodes (OUs). Is there a way to do this with Squid? My Squid is running on a Debian machine.
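
    The stock group helper cannot match on where a user sits in the tree, but an external ACL helper can. A hedged sketch, reusing the bind credentials from above; the helper path, TTL and the exact OU base are assumptions to adapt:

      # squid.conf
      external_acl_type ou_check ttl=300 %LOGIN /usr/local/bin/check_ou.sh
      acl lev1_users external ou_check Lev1
      acl lev2_users external ou_check Lev2
      acl lev3_users external ou_check Lev3

      #!/bin/sh
      # /usr/local/bin/check_ou.sh - Squid sends "username OU" per request; answer OK or ERR
      while read user ou; do
          if ldapsearch -x -H ldap://ldapserver \
                  -D "CN=myadmin,OU=Unrestricted Users,OU=my,DC=company,DC=com" -w mypwd \
                  -b "OU=$ou,OU=Users,OU=my,DC=company,DC=com" \
                  "(sAMAccountName=$user)" dn | grep -q '^dn:'; then
              echo OK
          else
              echo ERR
          fi
      done

    Each lev*_users ACL could then be used in http_access rules just as a group-based ACL would be.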

    Read the article

  • A Domain Admin user doesn't have effective Administrative rights on a Domain Computer

    - by rwetzeler
    I am a developer who is setting up a virtual domain environment for testing purposes and am having trouble with the setup. I have created a new DC on a new forest... call it dev.contoso.com. I have set up a virtual internal network for all machines that are going to be a part of this virtual test environment and have given each machine a static IP address in the 192.169.150.0 subnet. I have added machine1.dev.contoso.com to the domain dev.contoso.com. I have also provisioned a user account (adminuser) in the domain and made that user a member of the Domain Admins group. Upon logging into machine1 using my newly created Domain Admin account, I cannot access/run any files on machine1. When I go into the advanced permissions for the c:\ folder and go to Properties - Security tab - Advanced - Effective Permissions and search for dev\adminuser (mentioned above), I get an error saying: Windows can't calculate the effective permissions for admin user. What do I need to do to get administrative rights on machine1? I am using Server 2008 R2 for both the AD controller and machine1.
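
    Two quick read-only checks (built into Server 2008 R2) that would confirm how machine1 actually built the logon token for dev\adminuser, which helps tell a group-membership problem apart from something like UAC token filtering:

      whoami /groups | findstr /i /c:"Domain Admins"    # is the Domain Admins group present in the current token?
      gpresult /r                                       # summary of applied policy and security group membership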

    Read the article

  • Instabilities with Bridged and bonded interfaces

    - by Henry-Nicolas Tourneur
    I posted yesterday about getting a working setup with several bridged interfaces used for virtual machines (KVM/libvirt). One of the bridged interfaces just uses eth3 as its port, while the second one (public traffic) sits on an Ethernet bonded interface. That setup works, but not all the time! I can start a download from a VM, then it will stop and freeze. So I don't know if my bridge parameters are correct - could you check the config below?

      iface eth3 inet manual

      auto bond0
      iface bond0 inet manual
          slaves eth1 eth2
          pre-up ip link set bond0 up
          down ip link set bond0 down

      auto br0
      iface br0 inet static
          address 10.160.0.7
          netmask 255.255.255.128
          bridge_ports eth3
          bridge_fd 9
          bridge_hello 2
          bridge_maxage 12
          bridge_stp on

      auto br0:1
      iface br0:1 inet static
          address 10.160.0.9
          netmask 255.255.255.255

      auto br0:2
      iface br0:2 inet static
          address 10.160.0.10
          netmask 255.255.255.255

      auto br1
      iface br1 inet static
          address 217.4.40.242
          netmask 255.255.255.240
          gateway 217.4.40.241
          pre-up /etc/network/firewall start
          bridge_ports bond0
          bridge_fd 9
          bridge_hello 2
          bridge_maxage 12
          bridge_stp on

      auto br1:1
      iface br1:1 inet static
          address 217.4.40.252
          netmask 255.255.255.255

      auto br1:2
      iface br1:2 inet static
          address 217.4.40.253
          netmask 255.255.255.255

    And yes, the host also sometimes logs martian-source messages:

      kernel: [249146.055172] martian source 10.160.0.17 from 10.160.0.10, on dev vnet2
      kernel: [249146.073122] ll header: ff:ff:ff:ff:ff:ff:54:52:00:76:c3:5c:08:06

    Read the article

  • Hosting email for a domain name that wasn't our primary domain name

    - by drpcken
    Exchange 2007 on a Server 2003 Active Directory. My primary domain (MyMainDomain.com) controller also hosts DNS and DHCP. I have a secondary domain name (MySecondDomain.net) that my Exchange server accepts email for. It wasn't a physical domain, just accepted by Exchange and set up as the Active Directory users' main SMTP and outgoing address. Its MX records pointed to MyMainDomain.com's public Exchange address. I've taken MySecondDomain.net and moved the mailboxes to a hosted Exchange 2010 environment. MX records now point to this new Exchange system, and when I send an email from OUTSIDE the MyMainDomain.com environment (say Gmail), it works and is delivered to the hosted Exchange setup for MySecondDomain.net. However, when I send an email from a user on MyMainDomain.com, it goes to the old Exchange 2007 server I am hosting internally. I have removed MySecondDomain.net from the accepted domains, removed the DNS zone for MySecondDomain.net, and cleared the DNS cache. I was convinced it was my internal DNS server, but I've already cleared its cache. Is there something I'm missing somewhere in Exchange 2007? Or is it my domain controller/DNS? Sorry if this is confusing. Thank you!
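
    Two read-only checks in the Exchange 2007 Management Shell that would show whether the old server still considers itself responsible for MySecondDomain.net, or still has a connector routing mail for it (worth running before digging further into DNS):

      Get-AcceptedDomain | Format-List Name,DomainName,DomainType
      Get-SendConnector  | Format-List Name,AddressSpaces,SmartHosts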

    Read the article

  • Configuring OS X L2TP VPN to use Certificate for IPSEC layer instead of Pre Shared Key

    - by Matthew Savage
    I'm trying to set up an L2TP VPN on an OS X Snow Leopard Server and have had success using a pre-shared key, but I would rather not rely on a simple string and use a certificate instead. Setting this up on the server side is seemingly easy: you simply select a certificate you have generated from the list and hit apply. However, when I try to use the certificate on the client side it fails. I have exported the certificate into a P12 file, transferred it to the client, and imported it into the login keychain. But when I try to choose the certificate (from Network preferences, clicking Authentication Settings, then selecting Certificate and pressing Select) I am shown the following error: No machine certificates found. Certificate authentication cannot be used because your keychain does not contain any suitable certificates. Use Keychain Access to import the certificate into your keychain. If you do not have the certificates required for authentication, contact your network administrator. Unfortunately, even when I generate a certificate where I override the defaults and ensure the DNS name etc. are set properly, this doesn't change. When I select Certificate Authentication for the user authentication and click Select, the certificate for the server shows up there, but obviously this isn't where I need it to be available.
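
    The "No machine certificates found" wording hints that the IPsec (machine) certificate is expected in the System keychain rather than the login keychain. A hedged sketch of importing the identity there - the file path is a placeholder, and -P can be added to supply the export passphrase:

      sudo security import /path/to/vpn-client.p12 -k /Library/Keychains/System.keychain -f pkcs12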

    Read the article

  • Calendar sharing between unrelated Domains

    - by vlannoob
    I have a 'request' from one of the 'big guys' that has me scratching my head a bit. He is one of our executives who is also a board member of several other organisations, so he floats around between four different unrelated companies, each with their own domain, Exchange setup, etc. He has domain accounts in each organisation. He has an iPad with multiple Exchange accounts so he can see all his calendars, which works OK for him - Apple calendaring bugs/flaws aside. What he wants is the ability for 'reception staff' at each organisation to 'see' all his calendars, since they are booking things for him in their respective organisations' calendars without knowing whether it conflicts with bookings made in his calendars by the other organisations... are you with me? So, for example: Company A books a meeting into his Company A calendar at 9am Monday, Company B books him a meeting in his Company B calendar at 9:15am Monday on the other side of town, and of course Company C has him booked in all day Monday on their Company C calendar. He gets all of those on his iPad, but he would like either a 'global' calendar everyone can see and book into, or the ability for receptionists at Companies A, B and C to see all the calendars to avoid these kinds of conflicts. I told him to 'go away' straight off the bat; I don't control anything to do with the other companies or know their infrastructure, and quite frankly I don't want any part of it... but he's whining and he's high enough up the food chain that I can't ignore him forever. I'm open to suggestions. Is there any third-party software or service that can facilitate this kind of setup? I really don't want to be creating users in my AD structure for people not in our organisation just so they can get access to his calendar, and I am sure their sysadmins feel the same. As usual, any advice is greatly appreciated ;)

    Read the article

  • MySQL -- enable connection to remote server via local /tmp/mysql.sock

    - by Kevin
    Hey all, I run a shared hosting provider and we're looking to move to a high-availability (replicated across multiple datacenters) setup for our hosting. We have created a replicated MySQL setup with failover that works wonderfully, and we'd like to move all of our clients' databases to it. The only trouble is that we have many, many customers, all of whom have configured their Wordpress, Drupal, etc. installations to connect to MySQL via a local socket, not to the address of the remote server. I would hate to have to go through manually and change the connection settings in all of our clients' sites. What I'd ideally love to see is a program that listens on /tmp/mysql.sock and forwards connections there to the remote server I specify. I've seen SQL Relay, but it seems to require that I hardcode all of the database names, usernames and passwords into its configuration file. This is not going to work for me because our users add new databases dynamically all of the time, and I'd rather not have to write code to update SQL Relay's config file every time. Does anyone have an idea on how to do this? Alternatively, I'd accept ideas on how to handle this at the PHP level (i.e. redirect any attempted calls to mysql_connect() to use that hostname rather than localhost). Thanks, Kevin
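
    A hedged sketch of the "listen on the socket, forward to TCP" idea using socat (the remote hostname is a placeholder). Because clients connecting to "localhost" use the socket path, no per-site configuration or credential list is needed - MySQL itself still performs authentication on the remote end:

      socat UNIX-LISTEN:/tmp/mysql.sock,fork,unlink-early,mode=0666 TCP:mysql-master.example.com:3306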

    Read the article

  • IIS 6 ASP.NET default handler-mappings and virtual directories

    - by mlauter
    I'm having a problem with setting a default mapping in IIS 6. I want to secure *.html files with ASP.NET forms authentication. The problem seems to have something to do with using virtual directories to hold the HTML files. Here's how it's set up. Sample directory tree:

      c:/inetpub/ (nothing in here)
      d:/web_files/my_web_apps
      d:/web_files/my_web_apps/app1/
      d:/web_files/my_web_apps/app2/
      d:/web_files/my_web_apps/html_files/

    app1 and app2 both access the same html_files directory, so html_files is set as a virtual directory in the web apps in IIS. Sample web directory tree:

      //app1/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/)
      //app2/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/)

    If I put a file called test.html in the root of //app1/, add the default mapping to the ASP.NET DLL, and set up my security on the root folder with deny="?", then accessing test.html works exactly as expected: if I'm not authenticated it takes me to the login.aspx page, and if I am authenticated it displays test.html. If I put the test.html file in the html_files directory I get totally different behavior. Now the login.aspx page loads, and I stuck some code in to check if I was still authenticated: <p>autheticated: <%=User.Identity.IsAuthenticated%></p> I figured it would say false, because why else would it bother to load the login page? Nope, it says true - so it knows I'm authenticated, but it won't give me access to the test.html file. I've spent several hours on this and haven't been able to solve it. I'm going to spend some more time on Google to see if I've missed something. Fingers crossed.
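
    For reference, a minimal sketch of the forms-authentication lockdown described above, as it would typically appear in the application root web.config (names are generic, not taken from the original setup):

      <configuration>
        <system.web>
          <authentication mode="Forms">
            <forms loginUrl="login.aspx" />
          </authentication>
          <authorization>
            <deny users="?" />
          </authorization>
        </system.web>
      </configuration>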

    Read the article

  • Workstations cannot see new MS Server 2008 domain, but can access DHCP.

    - by Radix
    The XP Pro workstations do not see the new replacement domain upon boot; they only see their cached entry for the old (Server 2003) domain controller. The old server is not connected to the network. I have DHCP working with the same scope as the old server. In my "before-asking" search for a solution I came across the following two articles, and I recall doing things as they suggest: http://www.windowsreference.com/windows-server-2008/how-to-setup-dhcp-server-in-windows-server-2008-step-by-step-guide/ and http://www.windowsreference.com/windows-server-2008/step-by-step-guide-for-windows-server-2008-domain-controller-and-dns-server-setup/ The only possible issue is that I was under the impression the domain NetBIOS name needed to match the DC's NetBIOS name. The DC NetBIOS name is city01 while the domain's FQDN is city.domain.org (I think this is mistaken and should have been just domain.org). But the second link led me to a post which I believe answers my question. I did as it instructed by opening Local Area Connection Properties, selecting TCP/IPv4, and setting the sole preferred DNS server to the local host's static IP (10.10.1.1). Search for "Your problems should clear up" for the post I'm referencing: http://forums.techarena.in/active-directory/1032797.htm Have I misunderstood their instructions? I am hoping to reach the point where I can define users and user groups. Also, does TechNet have a single theoretical overview document I could read? I really don't like treating computers as magic. I will be watching this closely and will quickly answer any questions. If I've left anything out it is because I did not know it was needed. PS: I am loath to ask obviously basic questions, but I am tired and wish to fix this before tomorrow. Also, this is my first server installation - thank you for your help.
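
    One quick check from an affected XP workstation is whether it can find the new DC's locator records through the DNS server it received from DHCP, since domain discovery and logons depend on these SRV lookups. The zone names below are taken from the post (try both spellings, since the real FQDN is in doubt):

      nslookup -type=SRV _ldap._tcp.dc._msdcs.city.domain.org
      nslookup -type=SRV _ldap._tcp.dc._msdcs.domain.org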

    Read the article

  • VirtualHost not using correct SSL certificate file

    - by Shawn Welch
    I got a doozy of a setup with my virtual hosts and SSL. I found the problem; I need a solution. The problem is that, the way I have my virtual hosts and server names set up, the LAST VirtualHost directive is associating the SSL certificate file with the ServerName regardless of IP address or ServerAlias. In this case, SSL on www.site1.com is using the cert file that is established on the last VirtualHost, www.site2.com. Is this how it is supposed to work? This seems to be happening because both of them are using the same ServerName, but I wouldn't have thought that would be a problem. I am specifically using the same ServerName for a purpose and I really can't change that, so I need a good fix for this. Yes, I could buy another UCC SSL and put both names on it, but I have already done that; these are actually two different UCC SSLs already.

      <VirtualHost 11.22.33.44:80>
          ServerName somename
          ServerAlias www.site1.com
          UseCanonicalName On
          RewriteEngine On
          RewriteOptions Inherit
      </VirtualHost>

      <VirtualHost 11.22.33.44:443>
          ServerName somename
          ServerAlias www.site1.com
          UseCanonicalName On
          SSLEngine on
          SSLCertificateFile /usr/local/apache/conf/ssl.crt/cert1.crt
          SSLCertificateKeyFile /usr/local/apache/conf/ssl.key/cert1.key
          SSLCertificateChainFile /usr/local/apache/conf/chain/gd_bundle.crt
          RewriteEngine On
          RewriteOptions Inherit
      </VirtualHost>

      <VirtualHost 55.66.77.88:80>
          ServerName somename
          ServerAlias www.site2.com
          UseCanonicalName On
          RewriteEngine On
          RewriteOptions Inherit
      </VirtualHost>

      <VirtualHost 55.66.77.88:443>
          ServerName somename
          ServerAlias www.site2.com
          UseCanonicalName On
          SSLEngine on
          SSLCertificateFile /usr/local/apache/conf/ssl.crt/cert2.crt
          SSLCertificateKeyFile /usr/local/apache/conf/ssl.key/cert2.key
          SSLCertificateChainFile /usr/local/apache/conf/chain/gd_bundle.crt
          RewriteEngine On
          RewriteOptions Inherit
      </VirtualHost>
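
    Before touching the certificates, it can help to see exactly how Apache parsed this layout. The following read-only command prints, per IP:port, which vhost is the default and which ServerName/ServerAlias each one was registered under:

      apachectl -S    # (or: httpd -S) dump the parsed virtual host configuration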

    Read the article

  • Dynamic nginx domain root path based on hostname?

    - by Xeoncross
    I am trying to set up my development nginx/PHP server with a basic master/catch-all vhost config so that I can create unlimited ___.framework.loc domains as needed.

      server {
          listen 80;
          index index.html index.htm index.php;

          # Test 1
          server_name ~^(.+)\.frameworks\.loc$;
          set $file_path $1;
          root /var/www/frameworks/$file_path/public;
          include /etc/nginx/php.conf;
      }

    However, nginx responds with a 404 error for this setup. I know nginx and PHP are working and have permission, because the localhost config I'm using works fine:

      server {
          listen 80 default;
          server_name localhost;
          root /var/www/localhost;
          index index.html index.htm index.php;
          include /etc/nginx/php.conf;
      }

    What should I be checking to find the problem? Here is a copy of the php.conf they are both loading:

      location / {
          try_files $uri $uri/ /index.php$is_args$args;
      }

      location ~ \.php$ {
          try_files $uri =404;
          include fastcgi_params;
          fastcgi_index index.php;

          # Keep these parameters for compatibility with old PHP scripts using them.
          fastcgi_param PATH_INFO $fastcgi_path_info;
          fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

          # Some default config
          fastcgi_connect_timeout 20;
          fastcgi_send_timeout 180;
          fastcgi_read_timeout 180;
          fastcgi_buffer_size 128k;
          fastcgi_buffers 4 256k;
          fastcgi_busy_buffers_size 256k;
          fastcgi_temp_file_write_size 256k;
          fastcgi_intercept_errors on;
          fastcgi_ignore_client_abort off;
          fastcgi_pass 127.0.0.1:9000;
      }
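
    One hedged thing worth trying (a sketch, not a confirmed fix): numbered captures such as $1 from a server_name regex can be overwritten by later regex evaluations (for example the \.php$ location), so a named capture used directly in root avoids the intermediate set. The regex below keeps the "frameworks" spelling from the original config; it should match whatever the real test hostnames use:

      server {
          listen 80;
          index index.html index.htm index.php;
          server_name ~^(?<site>.+)\.frameworks\.loc$;
          root /var/www/frameworks/$site/public;
          include /etc/nginx/php.conf;
      }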

    Read the article

  • Re-configure Office 2007 installation unattended: Advertised components --> Local

    - by abstrask
    On our Citrix farm, I just found out that some sub-components are "Installed on 1st Use" (Advertised), which does not play well on terminal servers. Not only that, but you also get a rather non-descriptive error message when a document tries to use a component that is "Installed on 1st Use" (described in "Plan to deploy Office 2010 in a Remote Desktop Services environment"): Microsoft Office cannot run this add-in. An error occurred and this feature is no longer functioning correctly. Please contact your system administrator. I have ~50 Citrix servers where I need to change the installation state of all Advertised components to Local, so I created an XML file like this:

      <?xml version="1.0" encoding="utf-8"?>
      <Configuration Product="ProPlus">
        <Display Level="none" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" />
        <Logging Type="standard" Path="C:\InstallLogs" Template="MS Office 2007 Install on 1st Use(*).log" />
        <Option Id="AccessWizards" State="Local" />
        <Option Id="DeveloperWizards" State="Local" />
        <Setting Id="Reboot" Value="NEVER" />
      </Configuration>

    I run it with a command like this (using the appropriate paths):

      "[..]\setup.exe" /config ProPlus /config "[..]\Install1stUse-to-Forced.xml"

    According to the log file, the syntax appears to be accepted and the config file parsed:

      Parsing command line.
      Config XML file specified: [..]\Install1stUse-to-Forced.xml
      Modify requested for product: PROPLUS
      Parsing config.xml at: [..]\Install1stUse-to-Forced.xml
      Preferred product specified in config.xml to be: PROPLUS

    But the "Final Option Tree" still reads:

      Final Option Tree:
        AlwaysInstalled:local
        Gimme_OnDemandData:local
        ProductFiles:local
        VSCommonPIAHidden:local
        dummy_MSCOMCTL_PIA:local
        dummy_Office_PIA:local
        ACCESSFiles:local
        ...
        AccessWizards:advertised
        DeveloperWizards:advertised
        ...

    and the components remain "Advertised". Just to see if the installation state is overridden in another XML file, I ran:

      findstr /l /s /i "AccessWizards" *.xml

    against both my installation source and "%ProgramFiles%\Common Files\Microsoft Shared\OFFICE12\Office Setup Controller", but only found DefaultState set to "Local". What am I doing wrong? Thanks!

    Read the article

  • Which IP should I use for my nameserver

    - by Luke Bream
    Sorry to ask what's probably a very obvious question. I have just got a new server that is fantastically cheap but unfortunately doesn't come with any technical support, and I'm very out of my depth! My hosting company has provided the following information: "Below you will find your additional IP addresses added to the server 5.9.36.51. Please note that you can use the subnet only for this server. IP: 5.9.225.64 /27, Mask: 255.255.255.224, Broadcast: 5.9.225.95, Useable IP addresses: 5.9.225.65 to 5.9.225.94." The server has cPanel with WHM and I'm going through the setup. I have a number of questions. My domain is purchased from GoDaddy and I want to use it as the name server. Question 1: Which IP or IPs do I enter into the GoDaddy interface for ns1.mydomain.com and ns2.mydomain.com? Question 2: In the WHM nameserver setup, what do I enter for "Please enter an IP address for each of your nameservers" (ns1.mydomain.com ??????, ns2.mydomain.com ??????) and for "Add 'A Entries' for Hostname - IP for Entry: ?????????"? Thanks for any help you can give. Luke
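
    Purely as an illustration (the address choice is arbitrary - any two distinct addresses from the usable range the host listed would serve): register ns1 and ns2 as host/glue records at GoDaddy pointing at, say, 5.9.225.65 and 5.9.225.66, enter the same pair in the WHM nameserver setup and as the A entries, and then verify from the server that the records resolve:

      dig +short A ns1.mydomain.com
      dig +short A ns2.mydomain.com
      dig +short NS mydomain.com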

    Read the article

  • ownCloud WebDAV interface seems to be broken

    - by Nobleleader13245
    I've been trying to host ownCloud on my server, but every time I try, it tells me this: "Your web server is not yet properly setup to allow files synchronization because the WebDAV interface seems to be broken. Please double check the installation guides." This is my setup: Windows Server 2012 R2, IIS 8.5, PHP 5.5.11, ownCloud 6.0.3, MySQL 5.6.17. I tried googling the error but can't seem to find anything useful. Some say I should check whether this works: https://cloud.mcsoftworks.net/remote.php/webdav/ - and yes, I can navigate to this folder and open files from there. The calendar works and I can also just upload files from https://cloud.mcsoftworks.net/ - the only thing that doesn't seem to work is the sync client. The sync client doesn't say anything, it just doesn't connect (screenshot: http://prntscr.com/3p2apz). This is the error log:

      Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:56:00+00:00
      Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:55:47+00:00
      Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:55:34+00:00
      Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:55:34+00:00
      Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:37+00:00
      Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:36+00:00
      Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:36+00:00
      Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:36+00:00
      Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:51:24+00:00

    This is my php.ini: http://pastebin.com/es3MB8Uh Does anyone have any idea how I should get this to work? I've been trying for about 14 days now and it's starting to annoy me =P
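
    The isWebDAVWorking lines show that the curl self-check running on the server itself cannot resolve cloud.mcsoftworks.net. A quick check from the Windows Server 2012 box, plus a hedged workaround if the name only resolves from outside, might be:

      nslookup cloud.mcsoftworks.net
      rem If the name does not resolve locally, one workaround is to pin it in
      rem C:\Windows\System32\drivers\etc\hosts, e.g.:
      rem     127.0.0.1  cloud.mcsoftworks.net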

    Read the article

  • SQL Server installation error 0x84B40000

    - by Kurtevich
    I have a problem installing SQL Server 2008 R2. A long time ago I had it installed and then uninstalled it. It was left behind in "Add/remove programs", but I didn't pay attention to that. I then had 2005 installed. Now there is a need to install 2008, so I removed 2005 and started installing 2008, but it says there is not enough space on C:. That's when I found out that "Add/remove programs" shows the old installation occupying more than 4 gigabytes, even though I had uninstalled it. So I click "Remove"; it shows all those many screens and validations and reports that removal completed, but the size of the Program Files folder is still more than 4 GB. I removed from "Add/remove programs" everything that had "SQL Server" in its name, but the main "SQL Server 2008" item is still there, still 4 GB, and uninstalling it does nothing. Because the SQL Server installer did not show any existing instances, and I don't see any running services related to SQL Server (well, almost none - more details at the end), I thought this folder contained just leftover files and data, so I deleted it manually. Then I agreed to removal of the item in "Add/remove programs" and everything looked clean. Now every time I try to install SQL Server (even in the minimum configuration), I receive the following error: "SQL Server Setup has encountered the following error: The specified credentials that were provided for the SQL Server service are not valid. To continue, provide a valid account and password for the SQL Server service. Error code 0x84B40000." What is the service mentioned here? This error looks like I'm trying to add features to an existing server and it can't log in, but the setup didn't ask me for any credentials, except one username that couldn't be changed. Here are the services that could be related, both disabled and pointing to non-existent executables: SQL Active Directory Helper Service, and SQL Full-text Filter Daemon Launcher (MSSQLSERVER). I understand that this must be because of my manual deletion, but is there a way to clean it up now?
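
    To see exactly what those orphaned service registrations still look like, two read-only commands can be run from an elevated prompt; the short service name shown for the full-text launcher is the usual one and is an assumption here:

      sc query state= all | findstr /i "SQL"
      sc qc MSSQLFDLauncher    # shows the (now missing) binary path the leftover service still points to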

    Read the article

  • New monitor connected to HDMI adaptor doesn't show output after booting

    - by Paul
    Hello out there in the multiple monitors' world. I am a very old newbie in your world and need help. I just purchased a new Asus VH236H monitor and hooked it up to the HDMI port of an ATI Radeon HD4300/4500 Series display adaptor. I left the old Princeton LCD19 (TMDS) hooked up to the DVI port of the same display adaptor. Both monitors displayed the boot sequence after I fired up good old Sarastro2 (Asus P5Q Pro Turbo – Dual Core E5300 – 2.60 GHz). The Asus lagged half a second behind the Princeton until the Windows 7 Ultimate SP1 boot was complete. Then the Asus displayed "HDMI NO SIGNAL" and went into standby. The Princeton stayed lit up as before. Both monitors are shown in the "Screen Resolution" setup display and I played around with them for a while. The only thing I accomplished was to shove the desktop icons from the Princeton to the still-sleeping Asus. "Multiple displays:" is set to "Extend these displays", the orientation is "Landscape", and the resolutions are both set to the "recommended" values. Both monitors show that they are working properly in the advanced Properties display. What am I doing wrong; what am I missing? Never mind the opinions about the different resolutions of the two monitors - I can always unhook the Princeton and give it to a Goodwill store if I do not like the setup. I just would like to make it work. Any constructive help is very much appreciated. Thank you.

    Read the article

  • Router and Switch VLAN Configuration for Isolated Network

    - by Ben
    I haven't worked with VLANs much in the past and I was hoping someone could give me a good explanation of what I need to set up for this to work. I have a Netgear WNR2000v2 router and a Netgear GS108T smart switch currently in my network. The fourth port on the router connects to port one on the switch. I would like to be able to isolate port 8 on the switch for use as a "guest port" for when I bring home malware-infested PCs for repair. I figured the VLAN capabilities of the GS108T would be able to do this for me, but I think I have a misunderstanding of how VLANs actually work. Port 8 needs internet access but should not be able to communicate with the rest of the PCs on the home network. The subnet for the home network is 192.168.1.0/24 and I would like the guest PC to have A) 192.168.1.64 or B) 192.168.2.2. I am reading a lot of stuff about port trunking and VLAN membership, but I am confused as to which setup needs to be in place to make this work. Any help is greatly appreciated! Let me know if there is more information I need to provide. Definitely looking to learn something from this project. Thanks!

    Read the article

  • How do I SSH tunnel using PuTTY or SecureCRT through gateway/proxy to development server?

    - by DAE51D
    We have some Unix boxes set up in such a way that to get to the development box via ssh, you have to ssh into a 'user@jumpoff' box first. There is no direct ssh connection allowed to 'dev' from anywhere but 'jumpoff'. Furthermore, only key-based authentication is allowed on both servers, and you always log in to the development box as 'build@dev'. It's painful to always do that hopping. I know this can be done with SOCKS or a tunnel or something... I have set up a FreeBSD VM and I can get things to work perfectly using the Unix ssh tools. Basically all I do is make sure my VM's ~/.ssh/id_rsa.pub key is on both jumpoff and dev and use this ~/.ssh/config file:

      # Development Server
      Host ext-dev
          # this must be a resolvable name for "dev" from Jumpoff
          Hostname 1.2.3.4
          User build
          IdentityFile ~/.ssh/id_rsa

      # The Jumpoff Server
      Host ext
          Hostname 1.1.1.1
          User daevid
          Port 22
          IdentityFile ~/.ssh/id_rsa

      # This must come below all of the above
      Host ext-*
          ProxyCommand ssh ext nc $(echo '%h'|cut -d- -f2-) 22

    Then I simply type "ssh ext-dev" and I'm in like Flynn. The problem is I can't get the same thing to work using either PuTTY or SecureCRT - and to be honest I've not found any tutorials that really walk me through it. I see many on setting up some kind of proxy tunnel for Firefox, but that doesn't seem to be the same concept. I've been messing with trial and error most of the day and nothing has worked (obviously), and I'm at the end of my ssh knowledge and Google searching. I found this link which seemed to be perfect, but it doesn't work for me: the "Master" connects fine, but the "client" portion doesn't connect, telling me the remote system refused the connection. http://www.vandyke.com/support/tips/socksproxy.html I've got the VM, PuTTY and SecureCRT all using the same public/private key pairs to make things consistent and easier to debug. Does anyone have a straight-up example of how to do this in Windows?
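
    In PuTTY, the equivalent of that ProxyCommand hop can be expressed as a local proxy command (Connection > Proxy, proxy type "Local"); a hedged sketch, where the .ppk key path is a placeholder and the jumpoff address/user are taken from the config above. The PuTTY session itself then targets 1.2.3.4 with username build, and plink relays the connection through jumpoff:

      plink.exe -i C:\keys\jumpoff.ppk daevid@1.1.1.1 -nc %host:%port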

    Read the article

  • Problem linking two Cisco routers with a static route

    - by Chris Kaczor
    I'm trying to link two Cisco routers with a static route and I haven't been able to get it working as expected. Here is the basic setup: Router 1 is a WRV210 at 192.168.1.1, connected to the cable modem; Router 2 is an RV120W at 192.168.2.1. I already have several machines on Router 1 that are working, and I want to set up Router 2 with a few other machines on the second subnet. Here is what I've configured: connected the WAN port on Router 2 to a LAN port on Router 1; configured Router 1 to give 192.168.1.2 to Router 2 via DHCP; configured Router 1 with a static route (192.168.2.0 mask 255.255.255.0) to 192.168.1.2 using the LAN & Wireless interface; disabled the firewall on Router 2 (since it is covered by Router 1); set Router 2 to "Router" mode instead of "NAT" mode; and configured Router 2 with a static route (192.168.1.0 mask 255.255.255.0) to 192.168.1.1 using the WAN interface. From the research I've done I think that should be enough, but things aren't working exactly as expected: Router 2 can ping 192.168.1.1 and 192.168.1.101 (a machine on Router 1); a machine on Router 2 can also ping 192.168.1.1 and 192.168.1.101; Router 1 can NOT ping 192.168.2.1 or 192.168.2.101 (a machine on Router 2); a machine on Router 1 can NOT ping 192.168.2.1 or 192.168.2.101 either; and both Router 1 and a machine on Router 1 can ping 192.168.1.2 (Router 2 itself). I'm confused as to why Router 1 cannot talk to the 192.168.2.0/255.255.255.0 subnet. Any help would be greatly appreciated.

    Read the article

  • Can't add service account to domain group during SQL cluster install

    - by Sam
    I'm installing a 2008 instance on a Server 2003 machine which is already running SQL 2005. I need to set up domain groups for the security setup step (http://msdn.microsoft.com/en-us/library/ms179530.aspx): "On Windows Server 2003, specify domain groups for SQL Server services. All resource permissions are controlled by domain-level groups that include SQL Server service accounts as group members." (Much more info on this at http://support.microsoft.com/kb/910708.) I've had problems being able to add the Windows service accounts to the groups at install time; the security admins had to make my account a domain admin, which they were hesitant to do. "The account under which SQL Server Setup is running must have permissions to add accounts to the domain groups." Is there a specific security setting which would allow my account to add accounts to a group? UPDATE: I'm looking for specific instructions. I have a global group called domain\servicegroup - what do I tell the security folks to do? I'd love to figure it out myself, but I don't have access to this stuff.
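
    The specific delegation being asked about is write access to the group's "member" attribute, which can be granted on just that one group rather than making the installing account a domain admin. A hedged sketch of what the security folks could run (the group DN and account name are placeholders):

      dsacls "CN=servicegroup,OU=Groups,DC=domain,DC=com" /G "DOMAIN\sqlinstaller:WP;member"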

    Read the article
