Search Results

Search found 66534 results on 2662 pages for 'document set'.

  • Setting up multiple servers for one domain

    - by Joseph Torraca
    So I am starting up a new website and I was wondering how to set up five servers to host the site. I have already purchased five Apple Xserves; one will be used as a test server and the other four will be for the live site. I have read some websites about this, and they all reference using one server, installing load-balancing software onto it, and having that server do the load balancing. I have also read that you could use a hardware, rack-mounted load balancer and plug the servers into that; the load balancer would then distribute the load. So I have a few questions about each:

    1) How do you set up the software version, with the other servers as "slaves" and one "master" to direct traffic? (See the sketch below.)
    2) Which of the two options above is more reliable and better suited for a startup that doesn't have many users per month yet (hopefully)?
    3) Is there a theoretical maximum number of servers that can be connected to a software load balancing system? Obviously this will change from software to software, but in terms of the balancing server being able to handle it?
    4) In your own opinion, what are you using for your sites? Have you had any problems setting up that system or operating it once it's running? Is there anything you would stay away from if you had to start over?
    5) I also purchased an Apple RAID system, so if you are familiar with it, is there any way to connect it to multiple Xserves so they all serve the same data? I'm a little confused on this.

    Thanks for all your help and for being patient with me. Note: take it easy on me, I am learning this as I go along, so I may have used terms incorrectly or explained things that don't really make sense. Sorry. P.S. If you need me to supply the specs on the servers to determine which system makes the most sense, I can post them for you.
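
    For question 1, the usual software pattern is a reverse proxy running on the "master" that spreads incoming requests across the other machines. A minimal HAProxy sketch of the idea; the node names and 10.0.0.x addresses are hypothetical placeholders, not anything from the question:

        # haproxy.cfg - one front machine balancing three web nodes
        defaults
            mode http
            timeout connect 5s
            timeout client  30s
            timeout server  30s

        frontend site
            bind *:80
            default_backend web_nodes

        backend web_nodes
            balance roundrobin           # rotate requests across nodes
            option httpchk GET /         # stop sending to a node that fails checks
            server web1 10.0.0.11:80 check
            server web2 10.0.0.12:80 check
            server web3 10.0.0.13:80 check

    On question 3, setups like this have no meaningful hard cap on backend count; the practical limit is the front machine's bandwidth and connection-handling capacity.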

  • Clone virtual machine with Server 2008 R2 and Hyper-V?

    - by bwerks
    Hi all, I've recently started working with Hyper-V, and so far it's quite nice. However, I've been running into problems with what seems like it should be the most basic of workflows. I've set up a baseline Server 2008 R2 configuration and exported it with the intention of using the export for cloning. I entered "C:\Exports\" as the export folder. However, I run into problems when I try to import the image. From the Hyper-V manager, I select "Import Virtual Machine", and in the resulting window I entered "C:\Exports\BuildServer\" as the folder, set the radio button to "Copy the virtual machine (create a new unique ID)", and checked the checkbox for "Duplicate all files so the same virtual machine can be imported again." Doing so results in the following error:

    "Import failed. Import task failed to copy file from 'H:\Exports\BuildServer\Virtual Hard Disks\BuildServer.vhd' to 'C:\Hyper-V\Virtual Hard Disks\BuildServer.vhd': The file exists. (0x80070050)"

    Have I somehow messed something up in configuration? Or is this a known thing? I've read that it should be possible to clone VMs by copying them in the filesystem, but I'd prefer to keep things in the management UI if possible.
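
    For what it's worth, the error text itself reports that the copy target already exists, so one hedged thing to check before re-importing is whether a VHD left over from an earlier import attempt is sitting in the default virtual hard disk folder; the paths below are the ones quoted in the error:

        rem list the default VHD folder the error points at
        dir "C:\Hyper-V\Virtual Hard Disks"
        rem move the stale copy aside, then retry the import
        move "C:\Hyper-V\Virtual Hard Disks\BuildServer.vhd" "C:\Hyper-V\Virtual Hard Disks\BuildServer.vhd.bak"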

  • Is there a way to tell SGE to run specific jobs as root on the execution node?

    - by Rick Reynolds
    The title kinda says it all... We're using SGE/OGE to submit jobs to a set of worker nodes that then do things with specific pieces of equipment. The programs and scripts that manipulate this equipment rely on running as root. I'd like SGE to handle allocation of resources in a way that is mindful of users, groups, projects, etc., but I also need the actual jobs to run with root permissions. I've read up on "How can one run a prologue script as root in gridengine?" to see if anything there was pertinent, but it seems that SGE provides the "user@" kind of spec specifically for prolog and epilog kinds of actions. Is there any similar functionality for the job itself? I'm aware of su/sudo approaches, but that won't really work in this environment because the sudoers file isn't globally managed (i.e. I'd have to add a whole set of users to /etc/sudoers on lots of machines). I'm currently looking into a setuid kind of solution, but that would definitely be an unnecessary work-around if SGE gives me a way to declare that a specific job (or jobs in a specific queue) always needs to run with a specific user's rights.
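
    On the setuid route mentioned above, a minimal sketch; equipctl is a hypothetical compiled wrapper around the equipment commands, and it must be a real binary, since Linux ignores the setuid bit on scripts:

        # install the wrapper owned by root with the setuid bit set,
        # so SGE jobs running as ordinary users gain root only inside it
        sudo chown root:root /usr/local/bin/equipctl
        sudo chmod 4755 /usr/local/bin/equipctl
        ls -l /usr/local/bin/equipctl   # should show -rwsr-xr-x root root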

  • Apache reaching MaxClients and locking the server

    - by Rodrigo Sieiro
    Hi. I currently have an Apache2 server running with mpm-prefork and mod_php on an OpenVZ VPS with 512M real / 1024M burstable RAM (no swap). After running some tests, I found that the maximum process size Apache reaches is 23M, so I've set MaxClients to 25 (23M x 25 = 575M, OK for me). I decided to run some load tests on my server, and the results left me puzzled. I'm using ab on my desktop machine, requesting the main page from a WordPress blog.

    When I run ab with 24 concurrent connections, everything seems fine. Sure, CPU goes up and free RAM goes down, and the result is about 2-3s response time per request. But if I run ab with 25 concurrent connections (my server limit), Apache just hangs after a couple of seconds. It starts processing the requests, then it stops responding, CPU goes back to 100% idle, and ab times out. The Apache log says it reached MaxClients. When this happens, Apache keeps itself locked up with 25 running processes (they're all in "W" if I check server status), and only after the TimeOut setting (45 in my case) do the processes start to die and the server start responding again.

    My question: is that expected behaviour? Why does Apache just die when it reaches MaxClients? If it works with 24 connections, shouldn't it work with 25, just taking maybe more time to respond to each request and queueing up the rest? It sounds kind of strange to me that any kid running ab can single-handedly kill a webserver just by setting the concurrent connections to the server's MaxClients.
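
    For reference, the sizing arithmetic above maps onto an mpm-prefork block roughly like this; a sketch using the asker's own limits, with the remaining values being common defaults rather than anything from the question. ListenBacklog is the knob that decides how many further connections may queue once all 25 workers are busy:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            ServerLimit          25     # hard ceiling for MaxClients
            MaxClients           25     # 23M x 25 = 575M of RAM
            MaxRequestsPerChild 500     # recycle children to cap memory creep
        </IfModule>
        ListenBacklog 511               # connections past MaxClients wait here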

  • IIS 7 - The virtual path 'null' maps to another application, which is not allowed

    - by Miro
    I have run into an issue when setting up an IIS 7 farm for load balancing. I added 4 servers to the IIS farm with the appropriate ports (8080, 8081, 8082, 8083) and also added an inbound rule for the farm. The Tomcat instances listen on these ports. When I open the URL (which I set in the inbound rule), I get the following exception:

    The virtual path 'null' maps to another application, which is not allowed.

    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:
        [ArgumentException: The virtual path 'null' maps to another application, which is not allowed.]
        System.Web.CachedPathData.GetVirtualPathData(VirtualPath virtualPath, Boolean permitPathsOutsideApp) +8839122
        System.Web.HttpContext.GetFilePathData() +36
        System.Web.HttpContext.GetConfigurationPathData() +26
        System.Web.Configuration.RuntimeConfig.GetConfig(HttpContext context) +43
        System.Web.Configuration.CustomErrorsSection.GetSettings(HttpContext context, Boolean canThrow) +41
        System.Web.HttpResponse.ReportRuntimeError(Exception e, Boolean canThrow, Boolean localExecute) +101
        System.Web.HttpContext.ReportRuntimeErrorIfExists(RequestNotificationStatus& status) +538

    How can I solve this issue?

  • Port forwarding + shared connection with Ubuntu

    - by Joey Adams
    Because my wireless router's ethernet ports are defective, I set up a shared wireless connection from my laptop (which has wifi) to my eMac (which does not) via a crossover ethernet cable. The laptop is behind a router as 192.168.1.131, and the eMac is behind the laptop as 10.42.43.1. The laptop is running Ubuntu 9.10 (Karmic). I achieved the shared connection through NetworkManager Applet: I right-clicked on the network icon at the top right, went to Edit Connections, selected the wired connection named "Auto eth0", clicked "Edit...", went to the "IPv4 Settings" tab, and selected the method "Shared to other computers". The eMac can now access the Internet.

    Now I want to enable port forwarding. There's a game I want to play that needs port 6112 forwarded (both TCP and UDP) in order to host games. I set up the router to enable port forwarding for 192.168.1.131 (the laptop), but port forwarding still isn't available on the eMac. I suppose I need to pretend my laptop is a router and configure port forwarding on it, indicating that incoming connections to the laptop (192.168.1.131) should be forwarded to the eMac on the shared connection (10.42.43.1). Thus, packets coming into the router on port 6112 would be redirected to the laptop (by the router), then to the eMac (by the laptop). My question is, how would I do that on Ubuntu (in light of NetworkManager's presence)? Also, if I can't get this to work, does anyone mind hosting a comp stomp? :D
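
    Since NetworkManager's sharing mode already NATs through iptables, the redirection described above can be expressed as a DNAT rule on the laptop. A sketch, assuming the laptop's wifi interface is wlan0 (check with ifconfig); note that NetworkManager may rewrite the tables when the connection bounces, so the rules may need re-adding afterwards:

        # send incoming game traffic on port 6112 onward to the eMac
        sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 6112 -j DNAT --to-destination 10.42.43.1
        sudo iptables -t nat -A PREROUTING -i wlan0 -p udp --dport 6112 -j DNAT --to-destination 10.42.43.1
        # allow the forwarded packets through
        sudo iptables -A FORWARD -d 10.42.43.1 -p tcp --dport 6112 -j ACCEPT
        sudo iptables -A FORWARD -d 10.42.43.1 -p udp --dport 6112 -j ACCEPT
        # connection sharing should have enabled forwarding already; verify (should print 1)
        cat /proc/sys/net/ipv4/ip_forward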

  • Xen hypervisor 4.1 Kernel Panic on Ubuntu 12.04

    - by rkmax
    I have a fresh Ubuntu 12.04.1 amd64 server install, following this guide. I used the LVM option across the whole disk and made two LVs:

        /dev/mapper/vg-root  /     (80GB)
        vg-swap              swap  (4GB)

    I then installed Xen with apt-get install xen-hypervisor-4.1-amd64, configured /etc/default/grub as in the guide, and added GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=768M". After all this I ran update-grub and rebooted. But when I try to boot with Xen 4.1-amd64, I always get a kernel panic with the message:

    Domain-0 allocation is too small for kernel image

    My questions are: what is this error about, and where can I grow this allocation to avoid it?

    grub.cfg:

        menuentry 'Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.2.0-29-generic' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
            insmod part_gpt
            insmod ext2
            set root='(hd0,gpt2)'
            search --no-floppy --fs-uuid --set=root 3541e241-7f39-4ebe-8d99-c5306294c266
            echo 'Loading Xen 4.1-amd64 ...'
            multiboot /xen-4.1-amd64.gz placeholder dom0_mem=768M
            echo 'Loading Linux 3.2.0-29-generic ...'
            module /vmlinuz-3.2.0-29-generic placeholder root=/dev/mapper/backup--xen-root ro rootdelay=180
            echo 'Loading initial ramdisk ...'
            module /initrd.img-3.2.0-29-generic
        }

    Note: I've followed this guide too.
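
    A hedged guess at the second question: the allocation the panic refers to is the dom0 memory reservation set by dom0_mem on the multiboot line above, so raising it is the obvious dial to try. A sketch (1024M is an illustrative value, not a known-good one for this machine):

        # /etc/default/grub
        GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1024M,max:1024M"

        # regenerate grub.cfg and reboot into the Xen entry
        sudo update-grub
        sudo reboot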

  • memcached append() php ubuntu - bad protocol

    - by awongh
    I am running Ubuntu Gutsy (7.10), PHP5, and I am trying to get memcached running locally. I installed everything as per the docs: memcached daemon, PHP PECL extension, libevent, etc. But now I can only run half of the example script for memcached append():

        <?php
        $m = new Memcached();
        $m->addServer('localhost', 11211);
        $m->setOption(Memcached::OPT_COMPRESSION, false);
        $m->set('foo', 'abc');
        $m->append('foo', 'def');
        var_dump($m->get('foo'));
        ?>

    The script terminates at append() with an RES_BAD_PROTOCOL error message, but it still runs the get(). I don't know why memcached would otherwise be working fine (connect, set, get, with the correct value of 'abc') and not work for append; it also doesn't work with prepend. I believe I have the setup correct, but I am not sure. Maybe there are compatibility problems between the versions of the dependencies? Thanks much.
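
    One hedged thing to rule out: append and prepend arrived in the memcached server later than set/get (1.2.4, as far as I know), and Gutsy's packaged daemon may predate that. You can ask the running daemon for its version over the plain text protocol:

        # -q1 makes netcat exit once the reply arrives (Debian/Ubuntu netcat)
        echo version | nc -q1 localhost 11211
        # anything older than 1.2.4 simply has no append/prepend commands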

  • Can I change the "From" name and address without going into Mail.app's preferences?

    - by Arjan van Bentem
    Can I somehow add an editable "From" field to Mail.app, a bit like Virtual Identity for Thunderbird, just like I can change the "Reply To" address on the fly? In Mail.app, one can set up multiple email addresses for a single account by just entering a comma-separated list like [email protected], [email protected]. Then, when composing a message, Mail offers a dropdown to select a "From" address, and when replying†, it automatically selects the right address if it can find a match. Nice, but I'd like to be able to change the "From" on the fly, without going into the Account Information. Also, in previous versions of Mail one could even specify multiple full names for a single account:

    Email Address: Arjan <[email protected]>, Arjan on SU <[email protected]>

    But nowadays Mail only uses the Full Name setting, and even ignores names in the Email Address field when no value for Full Name is set at all. Hence, it would be great if I could change the Full Name on the fly as well. I've had no luck finding a plugin for Mail yet.

    † When using sub-addressing, anything that is sent to [email protected] is simply delivered to [email protected]. I'd then like to reply with the same full address, rather than just [email protected] if Mail cannot find a match, without going into the Account Information first. I sometimes also want to compose a new message with a new sub-addressing address.

  • Server 2003, XP Clients, DNS issues

    - by ron
    Hello, I'm having DNS issues on my network. My DC is my DNS server (10.76.4.11), and recently I configured a forwarder to 10.4.36.10. My workstations are not working because they cannot resolve the domain controller's name via DNS. An ipconfig /all reveals that they know the IP of the DNS server is 10.76.4.11, but if I nslookup 10.76.4.11 it forwards the request to 10.4.36.10 and goes nowhere. I have since removed the forwarder, but any nslookup requests on workstations are still going to 10.4.36.10. If I nslookup 10.76.4.11 on the server, it can resolve its name, but for some reason when it receives the same request from workstations it doesn't know what to do. All the A, CNAME, etc. records are correct. DHCP's DNS option is set correctly, the GPOs are correct (even though they can't refresh because of this problem!), and the server's network adapter has its DNS set to 10.76.4.11. I just don't know. Very confused.
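
    To separate workstation-side behaviour from server-side behaviour, it can help to flush the client resolver cache and then query the DC directly by IP; a sketch, where dc1.example.local stands in for the real DC name:

        C:\> ipconfig /flushdns
        C:\> nslookup dc1.example.local 10.76.4.11
        C:\> nslookup 10.76.4.11 10.76.4.11

    If those queries answer correctly while a plain nslookup still chases 10.4.36.10, the stale forwarder is being cached on the server side (restarting the DNS Server service clears its own cache).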

  • Sticky sessions not sticky on coldfusion cluster

    - by GreatSeaSpider
    We're trying to deploy a legacy ColdFusion site onto a new CF8 cluster. The cluster consists of three CF instances running under JRun4 on a single Windows 2008 server. I've got the cluster set to not replicate sessions, with sticky sessions turned on, and each instance is set to use J2EE session variables. The application tag for the site has:

        sessionmanagement="Yes" setclientcookies="Yes" setdomaincookies="Yes"

    When each instance starts, no errors are reported in the instance log and they join the cluster without any issues, though the instances do log:

        16/10 08:31:25 info SessionReplicationService successfully joined a JINI lookup service (assigned JINI-ID .....)

    and

        16/10 08:31:25 info Clusterable service SessionReplicationService discovered a SessionReplicationService peer on a JRun server named "xxxx" on host xxxx

    which is interesting, since session replication is definitely off. Is the SessionReplicationService responsible for sticky sessions as well?

    That's enough background. The problem is that the sticky sessions appear to simply not work: each request is bounced to a different instance, and it seems as if the sessions are being lost on each instance anyway. As soon as the cluster is down to a single instance, the web app works exactly as expected and the sessions seem fine. Has anybody got any ideas for me? I've been trawling the web and I can't seem to find any answers.

  • iptables won't save the rule.

    - by ArchUser
    Hello, I'm using Arch Linux and I have an iptables rule set that I know works (it's from my other server) in /etc/iptables/iptables.rules; it's the only rule set in that directory. I run /etc/rc.d/iptables save, then /etc/rc.d/iptables restart, but when I do iptables --list I get ACCEPTs on INPUT, FORWARD & OUTPUT.

        # Generated by iptables-save v1.4.8 on Sat Jan 8 18:42:50 2011
        *filter
        :INPUT DROP [0:0]
        :FORWARD DROP [0:0]
        :OUTPUT ACCEPT [216:14865]
        :BRUTEGUARD - [0:0]
        :interfaces - [0:0]
        :open - [0:0]
        -A INPUT -p icmp -m icmp --icmp-type 18 -j DROP
        -A INPUT -p icmp -m icmp --icmp-type 17 -j DROP
        -A INPUT -p icmp -m icmp --icmp-type 10 -j DROP
        -A INPUT -p icmp -m icmp --icmp-type 9 -j DROP
        -A INPUT -p icmp -m icmp --icmp-type 5 -j DROP
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -j interfaces
        -A INPUT -j open
        -A INPUT -p tcp -j REJECT --reject-with tcp-reset
        -A INPUT -p udp -j REJECT --reject-with icmp-port-unreachable
        -A INPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP
        -A INPUT -f -j DROP
        -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,SYN,RST,PSH,ACK,URG -j DROP
        -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
        -A INPUT -i eth+ -p icmp -m icmp --icmp-type 8 -j DROP
        -A BRUTEGUARD -m recent --set --name BF --rsource
        -A BRUTEGUARD -m recent --update --seconds 600 --hitcount 20 --name BF --rsource -j LOG --log-prefix "[BRUTEFORCE ATTEMPT] " --log-level 6
        -A BRUTEGUARD -m recent --update --seconds 600 --hitcount 20 --name BF --rsource -j DROP
        -A interfaces -i lo -j ACCEPT
        -A open -p tcp -m tcp --dport 80 -j ACCEPT
        -A open -p tcp -m tcp --dport 10011 -j ACCEPT
        -A open -p udp -m udp --dport 9987 -j ACCEPT
        -A open -p tcp -m tcp --dport 30033 -j ACCEPT
        -A open -p tcp -m tcp --dport 8000 -j ACCEPT
        -A open -p tcp -m tcp --dport 8001 -j ACCEPT
        -A open -s 76.119.125.61 -p tcp -m tcp --dport 21 -j ACCEPT
        -A open -s 76.119.125.61 -p tcp -m tcp --dport 3306 -j ACCEPT
        -A open -p tcp -m tcp --dport 22 -j BRUTEGUARD
        -A open -s 76.119.125.61 -p tcp -m tcp --dport 22 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        COMMIT
        # Completed on Sat Jan 8 18:42:50 2011
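
    A hedged sanity check: feeding the file to iptables-restore by hand reports a line number if the rules file is malformed, and shows whether the ruleset applies at all before the rc.d script gets involved:

        # apply the saved rules directly and surface any parse error
        sudo iptables-restore < /etc/iptables/iptables.rules
        # then confirm the chains and policies actually loaded
        sudo iptables -L -n -v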

  • How do I connect two computers with a LAN cable?

    - by John
    I have two machines: a desktop running Windows XP and a laptop running Windows 7. I connected them with a LAN cable. On the Windows XP machine I set the IP address to 192.168.0.10; on the Windows 7 laptop I set it to 192.168.0.20. The laptop can see the Windows XP machine, but the Windows XP machine cannot see the Windows 7 machine. This does NOT concern me, though: I want to move files from my desktop (Windows XP) to the laptop (Windows 7), which is why I'm going through all this. The problem is that when I try to connect from Windows 7 to the Windows XP machine, I get a login window asking for credentials. I don't understand what username/password is needed; I use none on the Windows XP machine. I tried all usernames with no success. Please explain in detail how to solve my problem so I can connect to my Windows XP machine. EDIT: Maybe this can help: the Windows XP machine is named 'I' and '???????? III' is the name of the laptop. Both computers share one workgroup, WORKGROUP.
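
    A hedged guess at the credentials prompt: XP by default refuses network logons for accounts that have a blank password, which would match every username being rejected here. A sketch of mapping the share from the Windows 7 laptop after setting a password on an XP account (xpuser is a placeholder, and SharedDocs only exists if the XP machine shares its Shared Documents folder):

        rem map the XP machine's share with explicit credentials; * prompts for the password
        net use Z: \\192.168.0.10\SharedDocs /user:192.168.0.10\xpuser *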

  • 426 Connection closed; transfer aborted.

    - by Jiaoziren
    Hi, I have an IIS FTP site set up on Windows 2003 SP2 (S1). Early every morning, a script on another server (S2) runs and initiates an FTP transfer, pulling log files from S1 to S2. The FTP client we're using is the built-in FTP.exe of Windows 2000 on S2. Recently we replaced S1 with a new server but kept the IP address; there are multiple IP addresses on the new S1. Ever since the new S1 was in place, '426 Connection closed; transfer aborted.' errors have been occurring randomly. The log indicates that the transfer starts OK but the file cannot be transferred completely, as per the log below:

        mget access*.log
        200 Type set to A.
        200 PORT command successful.
        150 Opening ASCII mode data connection for access02232010.log(205777167 bytes).
        426 Connection closed; transfer aborted.
        ftp: 20454832 bytes received in 283.95Seconds 72.04Kbytes/sec.

    The firewall monitor suggested that the connection was set up in passive mode; however, I've been told that MS FTP.exe doesn't support passive mode, though I can see the response 'entering passive mode' from the server when typing in 'quote pasv'. My network admin has told me to try the transfer in active mode, but I don't know how to enable active mode on the client side. It's getting really frustrating. I wish someone here with the right knowledge/experience could shed some light. Cheers.

  • Why do weekly tasks created via PowerShell using a different user fail with error 0x41306

    - by Danny Tuppeny
    We have some scripts that create scheduled jobs using PowerShell as part of our application. When testing them recently, I noticed that some of them always failed immediately, and no output is ever produced (they don't even appear in the Get-Job list). After many days of tweaking, we've managed to isolate it to any jobs that are set to run weekly.

    Below is a script that creates two jobs that do exactly the same thing. When we run this on our domain and provide credentials of a domain user, then force both jobs to run in the Task Scheduler GUI (right-click - Run), the daily one runs fine (0x0 result) and the weekly one fails (0x41306). Note: if I don't provide the -Credential param, both jobs work fine. The jobs only fail if the task is both weekly and running as this domain user.

    I can't find information on why this is happening, nor think of any reason it would behave differently for weekly jobs. The "History" tab in the Task Scheduler has almost no useful information, just "Task stopping due to user request" and "Task terminated", both of which have no useful info:

        Task Scheduler terminated "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" instance of the "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" task.
        Task Scheduler stopped instance "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" of task "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" as request by user "MyDomain\SomeUser".

    What's up with this? Why do weekly tasks run differently, and how can I diagnose this issue? This is PowerShell v3 on Windows Server 2008 R2. I've been unable to reproduce this locally, but I don't have a user set up in the same way as the one in our production domain (I'm working on this, but I wanted to post this ASAP in the hope someone knows what's happening!).

        Import-Module PSScheduledJob

        $Action = { "Executing job!" }
        $cred = Get-Credential "MyDomain\SomeUser"

        # Remove previous versions (to allow re-running this script)
        Get-ScheduledJob Test1 | Unregister-ScheduledJob
        Get-ScheduledJob Test2 | Unregister-ScheduledJob

        # Create two identical jobs, with different triggers
        Register-ScheduledJob "Test1" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Weekly -At 1:25am -DaysOfWeek Sunday)
        Register-ScheduledJob "Test2" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Daily -At 1:25am)

  • Problem recreating BCD on Windows 7 64bit - The requested system device cannot be found

    - by Domchi
    An NVIDIA driver upgrade crashed my Windows 7 installation, so I'm working to undo the damage. What I can do: I can boot the Windows install from the USB drive, and I can boot Hiren's Boot CD. Although automated Windows repair fails, I can get to a command prompt when I boot the Windows install from USB, and I can see my drive and all my data. What I cannot do: I cannot boot into Windows; I get this message:

        Windows failed to start. A recent hardware or software change might be the cause. To fix the problem: 1. Insert the Windows CD and run the repair your computer option.
        File: /boot/bcd
        Status: 0xc000000f
        Info: an error occurred while attempting to read the boot configuration data.

    It seems that something is wrong with my /Boot/BCD, so I'm trying to recreate it from scratch. I've tried all the methods detailed here (including Windows repair, which fails), and I'm left with the last one (near the bottom of that page). When I type the following command as in the tutorial:

        bcdedit.exe /import c:\boot\bcd.temp

    it fails with the following error: "The store import operation has failed. The requested system device cannot be found." Many Google results say that I must use diskpart to set my partition active, but it's already set as active. Also, when I try this:

        bcdedit /enum

    it fails with a similar message: "The boot configuration data store could not be opened. The requested system device cannot be found." Does anyone know what that error message means, and what is the requested system device? I'd like to avoid having to reinstall Windows, since all the files on disk seem to be fine.
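
    For reference, the recovery-environment route that usually comes before a manual bcdedit import is bootrec; a hedged sketch from the USB install's command prompt, where the attrib/ren steps assume the store really is at C:\boot\bcd as the boot error says:

        rem unhide the broken store and move it aside
        attrib -h -r -s c:\boot\bcd
        ren c:\boot\bcd bcd.old
        rem scan the disk for Windows installations and rebuild the store
        bootrec /rebuildbcd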

  • SQL Server 2008 R2 transactional replication over VPN

    - by enashnash
    I'm having difficulty setting up replication over a VPN. I have a SQL Server 2008 R2 Enterprise Edition database on a Windows 2008 R2 server, with SQL Server running on a non-standard port. I have set the server up to act as its own distributor and have configured a publisher on it. It is set as an updatable transactional publication (yes, this is necessary). On this server, I have Routing and Remote Access enabled in order to be able to establish VPN connections. It is configured with a static IP address pool, of which the first in the range is always assigned to the server. I have assigned a test user a static address within this range (I don't know if this is necessary or not). All clients will be 2008 R2 versions, but could be SQL Express or standalone developer instances of the full product.

    I can establish a VPN connection from the client without problems. After connecting to the database to test that I could establish a connection, I realised that I needed to be able to connect to the database using the server name rather than an IP address, which is required for replication, and that wouldn't work initially. I created an entry in the client's hosts file for the server using the NETBIOS name of the server, and now I can connect to the server from the client using the SERVER\INSTANCE, PORT syntax over the VPN. As it is the default instance on the server, I can also connect with the simple SERVER, PORT syntax.

    After all that, I still get the following dreaded error:

        SQL Server replication requires the actual server name to make a connection to the server. Connections through a server alias, IP address, or any other alternate name are not supported. Specify the actual server name, 'SERVER\INSTANCE'. (Replication.Utilities)

    What have I missed? How do I get this to work? TIA
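
    Since the error is about the name replication connects with, it may help to compare the hosts-file name against what the publisher believes its own name is; a quick hedged check, run through any working connection to the publisher:

        -- replication insists the connection name match this value exactly
        SELECT @@SERVERNAME;

    If the value differs from the NETBIOS name used in the hosts file (including any instance qualifier), that mismatch is the usual trigger for this message.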

  • Not Able To Connect to Windows Server 2008 R2 using FileZilla Externally

    - by obautista
    I configured the FTP service/role on my Windows Server 2008 R2 machine. I am able to connect from the inside, but not from the outside. On the inside I tested using the cmd prompt and IE FTP. On the outside, I am testing with FileZilla and IE FTP. From the outside, IE FTP prompts me to enter my username/password, but nothing happens; the page eventually times out and I get "Internet Explorer cannot display the page". Using FileZilla, I get the messages below. Note that FileZilla resolves the domain name and authenticates. I did not configure FTP Firewall Support on the FTP site, and I am not sure if I need to. I set up basic authentication, non-SSL, not allowing anonymous. I tested with Windows Firewall turned both off and on (I added a Windows firewall rule for port 21). On my network firewall (Cisco), I added a rule to forward port 21 traffic to the FTP server.

        Status: Resolving address of ftp.technologyblends.com
        Status: Connecting to 75.149.66.201:21...
        Status: Connection established, waiting for welcome message...
        Response: 220 Microsoft FTP Service
        Command: USER *
        Response: 331 Password required for .
        Command: PASS ***
        Response: 230 User logged in.
        Command: SYST
        Response: 215 Windows_NT
        Command: FEAT
        Response: 211-Extended features supported:
        Response: LANG EN*
        Response: UTF8
        Response: AUTH TLS;TLS-C;SSL;TLS-P;
        Response: PBSZ
        Response: PROT C;P;
        Response: CCC
        Response: HOST
        Response: SIZE
        Response: MDTM
        Response: REST STREAM
        Response: 211 END
        Command: OPTS UTF8 ON
        Response: 200 OPTS UTF8 command successful - UTF8 encoding now ON.
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is current directory.
        Command: TYPE I
        Response: 200 Type set to I.
        Command: PASV
        Error: Connection timed out
        Error: Failed to retrieve directory listing
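
    For what it's worth, the trace stops exactly at PASV, the point where the client opens a second, high-numbered data connection; only the control port (21) is forwarded here. A hedged sketch of pinning the passive range in IIS 7.5 FTP so the same range can be forwarded on the Cisco (the 50000-50100 range is illustrative):

        rem pin the passive data-channel port range for the FTP service
        %windir%\system32\inetsrv\appcmd set config /section:system.ftpServer/firewallSupport /lowDataChannelPort:50000 /highDataChannelPort:50100 /commit:apphost

    The site's external IPv4 address for passive replies can then be set on the FTP Firewall Support page in IIS Manager.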

  • Replace text with spaces in MySQL

    - by javipas
    I'm trying to do a global search-and-replace in my database, which has a lot of articles with a double carriage return because of this code:

        <p> </p>

    I'd like to replace this in my WordPress blog so that nothing appears instead, letting me delete the CR. I've tried this on my database:

        UPDATE wp_posts SET post_content = REPLACE(post_content, '<p> </p>', '');

    but it didn't work. Why? Do I have to do something special to match the space between the <p> and the </p>?

    Mmm. Good points, both Jon Angliss and Wim. Jon, as you could have guessed, the database shows no entries with that text string, so there's something going on inside the post_content field. Wim, the famous &nbsp; was replaced previously, but there are still hundreds of posts that for some reason have something different between the p and the /p tags. I've done a search on one of the posts with this error:

        SELECT * FROM wp_posts WHERE post_title LIKE '%3DVisionLive%';

    Looking in the post_content field, this is a little piece of the post:

        Phil Eisler, responsable de la divisi?n 3D Vision.?</p> <p>?</p> <p>Este portal ser? por tanto

    No Spanish tilde (accent) is shown on the terminal, and instead of a space there's a question mark between the p and the /p tags. I've tried to replace <p>?</p>, but again, no results. There's some character (or several) there, but I don't know how to discover which. Maybe it's the character set of my terminal; I've also accessed the database from phpMyAdmin, and there a space character shows between the p and the /p. Weird.
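
    To identify the mystery character without trusting any terminal's rendering, HEX() shows the raw bytes; if they turn out to be C2 A0 (a UTF-8 non-breaking space), UNHEX() can target it in the replace. A hedged sketch:

        -- dump the bytes around the first <p> so the hidden character shows as hex
        SELECT HEX(SUBSTRING(post_content, LOCATE('<p>', post_content), 12))
        FROM wp_posts
        WHERE post_title LIKE '%3DVisionLive%';

        -- if the bytes between the tags are C2 A0:
        UPDATE wp_posts
        SET post_content = REPLACE(post_content, CONCAT('<p>', UNHEX('C2A0'), '</p>'), '');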

  • hMail server - sending a copy of an e-mail while changing the sender

    - by Beggycev
    Dear all, please help me with the following request. I am using hMailServer in a company (test.com) and have several hundred guest e-mail accounts ([email protected]). I need to accomplish this: when any of the guest e-mails receives a message (from either an internal or an external sender), this e-mail (or a copy of it) is sent to another address, "[email protected]", which is the same for all of these guest e-mails. But I need the sender to be identified as the [email protected], not as the original sender, which is what happens when I use forwarding.

    I tried to prepare a simple VBS script using the OnAcceptMessage event to accomplish this, and it works on my testing machine without internet connectivity, but not in the production environment. To be specific: if I send an e-mail to [email protected] in my test environment, it is delivered to [email protected] with [email protected] being the sender. But in the production environment the e-mail stays in the guest mailbox with the original sender. I am interested in any solution, whether a rule in hMailServer or a script; anything is welcome. Thank you for any help! The script:

        Sub OnAcceptMessage(oClient, oMessage)
            'creating application object in order to perform operations as hMail server administrator
            Dim obApp
            Set obApp = CreateObject("hMailServer.Application")

            Dim adminLogin
            Dim adminPassword
            'Enter actual values for administrator account and password
            'CHANGE HERE:
            adminLogin = "Admin_login"
            adminPassword = "password"
            Call obApp.Authenticate(adminLogin, adminPassword)

            Dim addrStart
            'Take first 5 characters of the recipient's address
            addrStart = Mid(oMessage.To, 1, 5)

            'if the recipient's address starts with "guest"
            if addrStart = "guest" then
                Dim recipient
                Dim recipientAddress
                'enter name of the recipient and respective e-mail address
                'CHANGE HERE:
                recipient = "FINAL"
                recipientAddress = "[email protected]"

                'change the sender and sender e-mail address to the guest
                oMessage.FromAddress = oMessage.To
                oMessage.From = oMessage.To & "<" & oMessage.To & ">"

                'delete recipients and enter a new one - the actual mps and its e-mail from the variables set above
                oMessage.ClearRecipients()
                oMessage.AddRecipient recipient, recipientAddress

                'save the e-mail
                oMessage.Save
            end if
        End Sub

  • What's the risk of running a Domain Controller so that it is accessible from the internet?

    - by Adrian Grigore
    I have three remote dedicated web servers at different web hosts. Adding them to a common domain would make a lot of administration tasks much easier. Since two of the servers are running Windows 2008 R2 Standard, I thought about promoting them to domain controllers in order to set up the Windows domain; there's another thread at Serverfault that recommends this. At the same time, I've read in many places that this is not a good idea, because a domain controller should always be behind a firewall on a LAN. But I can't set up something like that, because I don't have a LAN with a static IP accessible from the internet; in fact, I don't even have a Windows server in my LAN. What I have not found out is why exposing a DC to the internet would be a bad idea. The only risk I can see is that if someone penetrates one of my web servers, it should be much easier to penetrate the others as well. But as far as I can see, that's the worst-case scenario, since I am joining only my web servers to that domain, not any computers from my local network. Is this the only downside, or does it also make it easier to penetrate one of my web servers in the first place? Thanks, Adrian

  • Fix stubborn 'Setting locale failed.'

    - by plua
    I have a very stubborn, well-known locale error on Ubuntu 9.10:

        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
            LANGUAGE = (unset),
            LC_ALL = (unset),
            LC_TIME = "custom.UTF-8",
            LANG = "en_US.UTF-8"

    I've tried the following:

    - Added LANG=en_US.UTF-8 and LC_ALL=en_US.UTF-8 to /etc/environment
    - Ran apt-get install --reinstall locales (error: perl: warning: Falling back to the standard locale ("C"). /usr/bin/mandb: can't set the locale; make sure $LC_* and $LANG are correct)
    - Ran sudo dpkg-reconfigure locales. Result: Cannot set LC_ALL to default locale: No such file or directory, and then it updates all locales including en_US.UTF-8
    - sudo locale-gen updates all locales successfully, including en_US.UTF-8
    - sudo locale-gen un_US en_US.UTF-8 gives no error nor other output
    - /etc/default/locale says LANG="en_US.UTF-8"
    - echo $LANG gives en_US.UTF-8
    - /var/lib/locales/supported.d/local says en_US.UTF-8 UTF-8
    - locale -a gives me: C en_AG en_AU.utf8 en_BW.utf8 en_CA.utf8 en_DK.utf8 en_GB.utf8 en_HK.utf8 en_IE.utf8 en_IN en_NG en_NZ.utf8 en_PH.utf8 en_SG.utf8 en_US.utf8 en_ZA.utf8 en_ZW.utf8 POSIX

    So, well... I am pretty much out of options I can think of. Anybody got any idea? Thanks!
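
    Worth noting: the warning itself names LC_TIME = "custom.UTF-8", and no such locale appears in the locale -a output above, so that variable is the odd one out. A hedged sketch of pointing it back at a locale that exists, using Ubuntu's update-locale helper:

        # rewrite the offending variable in /etc/default/locale, then log out and back in
        sudo update-locale LC_TIME=en_US.UTF-8
        # verify the result
        cat /etc/default/locale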

  • Subversion 1.6 + SASL : Only works with plaintext 'userPassword'?

    - by SiegeX
    I'm attempting to set up svnserve with SASL support on my Slackware 13.1 server, and after some trial and error I'm able to get it to work with the configuration listed below.

    svnserve.conf:

        [general]
        anon-access = read
        auth-access = write
        realm = myrepo

        [sasl]
        use-sasl = true
        min-encryption = 128
        max-encryption = 256

    /etc/sasl2/svn.conf:

        pwcheck_method: auxprop
        auxprop_plugin: sasldb
        sasldb_path: /etc/sasl2/my_sasldb
        mech_list: DIGEST-MD5

    sasldb users:

        $ sasldblistusers2 -f /etc/sasl2/my_sasldb
        test@myrepo: cmusaslsecretOTP
        test@myrepo: userPassword

    You'll notice that the output of sasldblistusers2 shows my test user as having both an encrypted cmusaslsecretOTP password and a plaintext userPassword; i.e., if I were to run strings /etc/sasl2/my_sasldb I would see the test user's password in plaintext. These two password entries were created with the following command, recommended by the Subversion book:

        saslpasswd2 -c -f /etc/sasl2/my_sasldb -u myrepo test

    Reading man saslpasswd2, I see the following option:

        -n    Don't set the plaintext userPassword property for the user. Only mechanism-specific secrets will be set (e.g. OTP, SRP)

    This is exactly what I want to do: suppress the plaintext password and only use the mechanism-specific secret (OTP in my case). So I cleared out /etc/sasl2/my_sasldb and reran saslpasswd2 as:

        saslpasswd2 -n -c -f /etc/sasl2/my_sasldb -u myrepo test

    I then followed it up with sasldblistusers2 and saw:

        $ sasldblistusers2 -f /etc/sasl2/my_sasldb
        test@myrepo: cmusaslsecretOTP

    Perfect, I thought, now I have only encrypted passwords... only neither the Linux svn client nor the Windows TortoiseSVN client can connect to my repo anymore. They both present me with the user/pass challenge, but that's as far as I get.

    TLDR: What is the point of SVN supporting SASL if my sasldb must store its passwords in plaintext to work?

  • Can a wifi AP act as a client and a server at the same time?

    - by nbolton
    I feel this is SF-worthy (as opposed to SU) as I go into a bit of detail on gateways/routing. Here's my ideal setup (if possible): there is a wifi network (let's call it Bob's) which I want access to, but I have a few other computers on my network which I want to keep behind a firewall. So I was thinking of buying a wireless access point, so that I could connect to Bob's network from the AP and then connect to the AP from my server via ethernet. So that's the first bit.

    The second part is that I want to have my own private wifi network off the back of this; can I then tell the AP to serve a new network called foobar? When I say private network, I mean that my server is actually a Debian Linux install with routing configured (on which I also do some QoS stuff, etc.). So ideally, I'd like all the clients on the private network to be behind the server in terms of routing. However, if the private clients connect to the server via wifi, then aren't they exposed to the "public" network? That is, if someone is savvy enough to scan for my IP range. Also, to do routing I'd need to connect two ethernet cables between the server and the AP (because you can't do routing/QoS on virtual devices), which isn't a problem really; but I'm not sure whether the AP will allow me to separate the public and private LANs. Or, as well as the AP, am I better off getting a wifi-to-ethernet adapter for the server? I could use a USB wifi adapter, but those can be tricky to set up on headless Linux, plus the signal strength is a bit lousy.

    If this question is a bit vague/spurious in places, please comment and I will explain in more detail.
