Search Results

Search found 11674 results on 467 pages for 'adding'.

Page 330/467 | < Previous Page | 326 327 328 329 330 331 332 333 334 335 336 337  | Next Page >

  • Two servers, two domains, one ip. mod_proxy beginner

    - by Gutsav
    I run two virtual web servers (both running apache2 on Debian). I have just one external IP but two domains, and I want one domain going to each of the servers. I understand that I need a reverse proxy, and I enabled both the mod_proxy and mod_proxy_http modules on the "primary server". Do I need to enable anything on the "secondary server"? I also understand that I need to put something in a virtual host file, but what? On the primary server, I have a virtual host file for one of the domains, and some for subdomains. I want domain1.tld to go to the primary server (port 80 is forwarded to it, so that works) and domain2.tld to go to the other server (internal IP 192.168.0.x). No ports need to be forwarded to it, right? So, what do I add, and in which virtual host file? Or a new one? Other questions suggest adding ProxyPass and ProxyPassReverse, but I'm lost anyway, and I just don't understand the Apache documentation. Thanks in advance
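
    For reference, a reverse-proxy virtual host of the kind the ProxyPass/ProxyPassReverse directives are meant for might look roughly like this on the primary server (domain2.tld and 192.168.0.x are the placeholders from the question; treat this as a sketch, not a tested configuration):

      <VirtualHost *:80>
          ServerName domain2.tld
          # Forward all requests for domain2.tld to the secondary server on the LAN
          ProxyPass        / http://192.168.0.x/
          ProxyPassReverse / http://192.168.0.x/
          ProxyPreserveHost On
      </VirtualHost>

    Only port 80 on the primary server needs to be reachable from outside; the proxy talks to the secondary server over the internal network.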

    Read the article

  • GlassFish v2.1 -- getting Application Client and Eclipselink to work together?

    - by Nick
    We are trying to use EclipseLink 1.1 with GlassFish v2.1, following the instructions at http://wiki.glassfish.java.net/Wiki.jsp?page=FaqEclipseLinkGlassFishV2. I adapted the instructions for the appclient script on Linux by adding the lines APPCPATH=$APPCPATH:$AS_INSTALL/lib/eclipselink-1.1.1.jar and export APPCPATH to the appclient shell script. This, however, is not working. On running the application client (using GlassFish's Java Web Start), I get the error: WARNING: "IOP00810257: (MARSHAL) Could not load class org.eclipse.persistence.indirection.IndirectList". Has anyone else succeeded in getting GlassFish v2.1 to work with EclipseLink? Or any ideas on what I might be doing wrong? I found the bug report https://glassfish.dev.java.net/issues/show_bug.cgi?id=8204, where Tim Quinn (tjquinn) said: "App client container support for persistence is not yet in place". I think this refers only to GlassFish v3, and it should be working in GlassFish v2. Is this correct? I'm working on the assumption that this will work once the ACC knows where to find the EclipseLink jar. Thanks in advance, Nick.

    Read the article

  • How Does EoR Design Work with Multi-tiered Data Center Topology

    - by S.C.
    I just did a ton of reading about the different multi-tier network topology options as outlined by Cisco, and now that I'm looking at the physical options (End of Row (EoR) vs. Top of Rack (ToR)), I find myself confused about how these fit into the logical constructs. With ToR the mapping is 1:1: at the top of each rack there are switches that essentially act as the access layer. They connect via fiber to other switches, maybe chassis-based, that act as the aggregation layer, which then connect to the core layer. With EoR it seems that the servers connect directly to the aggregation layer, skipping the access layer altogether, by plugging directly into what are typically chassis switches. With EoR, then, does the standard 3-tier model become a 2-tier model: the servers go to the chassis switch, which goes straight to the core switch? The reason it matters to me is that my understanding was that the 3-tier model was more desirable due to less complexity. The aggregation switch pair acts as default gateway and does routing; if you use up all of the ports in your aggregation-layer pair, adding additional switches is much more complicated than simply adding more switches at the access layer. Are there other downsides to this layout? Does the 3-tier architecture still apply in some way to EoR? Thanks.

    Read the article

  • Is there a better way to do bonded VLAN-tagged interfaces with Xen?

    - by AJ01
    We have a number of Xen servers, all running CentOS or RHEL. The VMs they run are all required to be on their own VLAN, for no other reason than that the customer expects them to be. Long story short, I can't change this right now. We are also required to have bonding enabled on the interfaces, so to accommodate this we enslave eth1 and eth2 to bond0. We then create a separate interface called bond0.VLANID, where VLANID corresponds to the correct VLAN; e.g. ifcfg-bond0.204:

      DEVICE=bond0.204
      BOOTPROTO=static
      ONBOOT=yes
      VLAN=yes
      BRIDGE=xenvlan204

    Bridge to Xen: as you will see, we eventually have to bridge this out to Xen, and we do this by adding another interface called xenvlan204 (in this instance), ifcfg-xenvlan204, which contains:

      DEVICE=xenvlan204
      BOOTPROTO=none
      ONBOOT=yes
      TYPE=bridge

    Xen VM config: finally, in our Xen config for each VM, we add:

      vif = [ "bridge=xenvlan204" ]

    This then allows the VM to access that particular VLAN. The problem: we've noticed a few issues with this setup. One is that we currently create the interfaces manually, which means that if we add more VLAN-enabled interfaces and bridges we usually have to restart xend, which is something I'm not so hot about. Also, lower-level staff have their heads melted by the number of interfaces, and the risk of a mistake occurring is high. Secondly, it can take some time for a host to come up if it has a number of VLAN-tagged interfaces. Thirdly, it just doesn't scale well from a management perspective. The question: is there a better, more flexible way to do this (in particular with the Xen that ships with CentOS 5.3, 5.4 and 5.5, as we have to support all three) that leverages either scripting or other solutions to allow an arbitrary number of interfaces to be created when a VM is instanced? Your advice and expertise are more than welcome.
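
    As a rough illustration of the scripting direction the question asks about, something along these lines could create a tagged interface and bridge on the fly without editing ifcfg files (vconfig and brctl are the classic CentOS 5 tools; the script name, whitelisting of IDs and error handling are assumptions, and this is an untested sketch rather than a recommended setup):

      #!/bin/bash
      # Usage: ./mkvlanbridge.sh 204
      # Creates bond0.<VLANID> and bridges it as xenvlan<VLANID>.
      VLANID="$1"

      vconfig add bond0 "$VLANID"              # create the tagged sub-interface bond0.<VLANID>
      ip link set "bond0.$VLANID" up

      brctl addbr "xenvlan$VLANID"             # create the bridge the Xen vif will attach to
      brctl addif "xenvlan$VLANID" "bond0.$VLANID"
      ip link set "xenvlan$VLANID" up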

    Read the article

  • Windows-to-Linux: PuTTY with SSH and a private/public key pair

    - by Johnny Kauffman
    I spent about 3 hours trying to figure out how to connect to a Linux box from my Windows machine using PuTTY without having to send the password. This is connecting to an Ubuntu server that is running OpenSSH. The private key is SSH-2 RSA, 1024 bits, and I am connecting using SSH-2. I have already run into the more common problems: PuTTY generated the public key in the "wrong format". I have corrected this (as seen on this blog post); however, since I am not yet connected, I cannot absolutely confirm that the file is in the correct format. The key is all on a single line now, and I have tried adding/removing line breaks at the end of the file. I've also tried the public-key conversion process a few times to ensure that I haven't flubbed up the manual conversion. Even so, I have no way to verify accuracy here. The permissions were at one point wrong as well, specifically meaning that the file had too many permissions. I had to solve this too, and I know it got past that stage because I no longer see a related error in /var/log/auth.log. I've tried both authorized_keys and authorized_keys2 in case the server has an old version of OpenSSH, but this changed nothing. I do have access as a user: after the keyfile attempt fails, I can enter my password instead. The only remaining nibble of information I have is that it claims I have the alleged password wrong: sshd[22288]: Failed password for zzzzzzz from zz.zz.zz.zz port 53620 ssh2. Even so, as far as I can tell, this is just a lazy try/catch somewhere, since I don't think there's a password involved at all. I see nothing else of use in any of the /var/log files. What else could be wrong?
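
    For comparison, the server-side checks that usually matter here look something like this (paths are the standard OpenSSH ones; the key comment is illustrative, and the key line in authorized_keys must be a single line starting with the key type):

      # On the Ubuntu server, as the user you log in as:
      chmod 700 ~/.ssh
      chmod 600 ~/.ssh/authorized_keys

      # authorized_keys should contain one line per key, e.g.:
      # ssh-rsa AAAAB3NzaC1yc2EA...rest-of-key... user@workstation

      # Watch the auth log while attempting to connect from PuTTY:
      tail -f /var/log/auth.log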

    Read the article

  • What's wrong with my .htaccess? Trying to simplify actual code

    - by AlexV
    This is my actual .htaccess:

      #If the requested URI does not end with an extension
      RewriteCond %{REQUEST_URI} !\.(.*)
      #If the requested URI is not in an excluded location
      RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
      #Then serve the URI via the mapper
      RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

      #If the requested URI ends with .php*
      RewriteCond %{REQUEST_URI} \.php.*$ [NC]
      #If the requested file is not seo-urls-mapper.php (avoid .htaccess loop)
      RewriteCond %{REQUEST_FILENAME} (?<!seo-urls-mapper)\.php.*$
      #Then serve the URI via the mapper
      RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

    Since all the conditions are compatible except the first ones (the no-extension and *.php* matches), all I should have to do is add the [OR] flag to those two lines, but when I add it, it doesn't work (my no-extension rule no longer matches). This is my new (not working) code:

      #If the requested URI does not end with an extension OR if the URI ends with .php*
      RewriteCond %{REQUEST_URI} !\.(.*) [OR]
      RewriteCond %{REQUEST_URI} \.php.*$ [NC]
      #If the requested file is not seo-urls-mapper.php (avoid .htaccess loop)
      RewriteCond %{REQUEST_FILENAME} (?<!seo-urls-mapper)\.php.*$
      #If the requested URI is not in an excluded location
      RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
      #Then serve the URI via the mapper
      RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

    Hopefully someone will be able to clarify this issue... I guess I don't fully understand the use of [OR]. Thanks!
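
    As background on how mod_rewrite combines conditions (a generic sketch, not a fix for the rules above): RewriteCond lines are ANDed by default, and [OR] only joins the flagged condition with the one immediately after it, so flag placement decides the grouping.

      RewriteCond %{REQUEST_URI} condA [OR]
      RewriteCond %{REQUEST_URI} condB
      RewriteCond %{REQUEST_URI} condC
      RewriteRule ...
      # evaluates as: (condA OR condB) AND condC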

    Read the article

  • VCL configuration for Magento and Varnish 3.0.2

    - by Tomas
    I would like to kindly ask if there's someone who can help me configure Varnish for Magento to achieve a much higher hit rate. My current ratio from varnishstat is cache_hit=271, cache_miss=926. I'm asking because I've googled almost every site related to this topic, but 99.9% of the configurations don't work because of outdated code. Details of my set-up: I use Varnish on port 80, Apache on port 81, PageCache as the Magento Varnish module, APC for PHP speed and memcached for dynamic caching. Load time is about 1.5 s on the home page from the USA and 2.5 s from Europe (Pingdom.com average results); the servers are located in Toronto, Canada. EDIT: This is my full VCL configuration: http://pastebin.com/885BzHCs (I just use xxx.xxx.xxx.xxx for my IPs). This is the output of the command varnishtop -i TxHeader -I Cookie:

      TxHeader Cookie: frontend=965b5...(*lots of numbers); adminhtml=3ae65...(*lots of numbers); EXTERNAL_NO_CACHE=1

    ("(*lots of numbers)" is just my own annotation.) Any idea how to stop these cookies from defeating Varnish? (If I understood the idea correctly, the point is to keep Varnish from seeing the cookie so the home page can actually be cached.) Thank you for any help!
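
    As an illustration of the usual direction for Magento-style cookie handling in Varnish 3 (a hedged sketch only: the admin-path regex and the choice to drop cookies on all other requests are assumptions, not part of the pastebin config above):

      sub vcl_recv {
          # Never cache the admin area; let those requests pass to Apache untouched.
          if (req.url ~ "^/(index\.php/)?admin") {
              return (pass);
          }

          # For ordinary frontend pages, strip the session cookies so the
          # request can be served from (and stored in) the cache.
          if (req.http.Cookie) {
              unset req.http.Cookie;
          }
      }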

    Read the article

  • How to set up a Hyper-V domain with Internet access

    - by fynnbob
    First off, let me say that I'm not a network admin or server guy; I know very little about that stuff. What I'm trying to do is set up a virtualized domain using Hyper-V. Here is the configuration: Physical server: 4 GB RAM, Windows Server 2008 R2 running Hyper-V. Virtual environment: one domain controller running Windows Server 2008 R2 and one client running Windows Server 2008 R2. I have been successful in setting up a virtual domain controller and adding a virtual client to that domain controller, but I'm stuck at trying to give the virtual environment Internet access. I can give the client VM Internet access if I remove it from the virtual domain, but once I add it back to the virtual domain, Internet access is gone. I've read articles describing many different ways this can be done (using RRAS with NAT, using a wireless connection, etc...), but all of those articles only cover a small piece of the setup and also seem to be geared towards people who know their way around networking and servers, which I don't. I'd like to know more, but my thing is software development and I have my hands full trying to keep up with everything in that realm. I simply want to set up a virtual domain with Internet access for testing. Can anyone point me to any "for Dummies" type information on how to set up this type of environment, or provide this kind of step-by-step help? Any help would be very much appreciated.

    Read the article

  • How Do I Use Multiple Versions of OpenSSL ... One for Apache and one for PHP

    - by Ken S.
    I have an Apache 2.2 server (a self-compiled build) that is getting dinged during a PCI scan because it does not support TLS 1.1 or 1.2 ciphers. After some digging I found that the installed version of OpenSSL (0.9.8e) does not contain the newest TLS support. So I downloaded and compiled the latest version of OpenSSL (1.0.1c) and have it installed in an alternate location within /opt so it wouldn't interfere with the installed version. What I would like to do is compile Apache against the 1.0.1 libraries and keep the system-installed libraries for use with PHP, cURL, OpenSSH, etc. I'm hoping that doing it this way will allow Apache to use the newest TLS without breaking any other programs that require the old libraries. I thought I could do this by adding an entry to /etc/ld.so.conf that points to the new libraries, but I think this will conflict with the existing ones, i.e. two references to libcrypto could cause everything to have issues. The main reason for keeping the old libraries is PHP cURLing to external servers: the latest OpenSSL libs cause issues there that would require edits to our PHP code. Would love some guidance on how best to accomplish this.
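
    A common way to do this kind of side-by-side build is to point Apache's configure at the alternate OpenSSL and bake the library path into the binary with an rpath, so the system copy and ld.so.conf stay untouched. A sketch, assuming OpenSSL was installed under /opt/openssl-1.0.1c (all paths are placeholders):

      # Build the new OpenSSL into its own prefix (shared libs enabled)
      cd openssl-1.0.1c
      ./config --prefix=/opt/openssl-1.0.1c shared
      make && make install

      # Build Apache against that copy, embedding the run-time library path
      cd ../httpd-2.2.x
      ./configure --prefix=/opt/apache2 \
                  --enable-ssl \
                  --with-ssl=/opt/openssl-1.0.1c \
                  LDFLAGS="-Wl,-rpath,/opt/openssl-1.0.1c/lib"
      make && make install

      # Verify which libssl/libcrypto the new httpd actually loads
      ldd /opt/apache2/bin/httpd | grep -E 'libssl|libcrypto'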

    Read the article

  • Any dangers in using DDR memory with a higher frequency than the FSB?

    - by raw_noob
    I'm looking to upgrade the memory in an older motherboard. The processor is an AMD Sempron 2500+ with a maximum speed of 333/166 MHz. The motherboard is an MSI MS-7061 (KV3M-V), which accepts up to 2 GB of DDR memory (maximum PC2700) in 2 slots and has a maximum FSB of 333 MHz. The board does not have dual-channel support. Existing memory includes a stick of 512 MB PC3200, which seems to be running OK (presumably at PC2700) but is rated at 200 MHz, which is below the FSB speed. The other stick is 256 MB PC2100/133 MHz, again below the FSB speed. (All figures from CPU-Z.) I have a chance to acquire a single used stick of PC3200/400 MHz memory very cheaply. Crucial's system scanner seems to suggest that this will be OK with my system, but other sites have suggested that running memory with a higher frequency than the FSB can cause instability. Is this true? Would I be better off waiting until I can buy the correct PC2700/333 MHz stick? I'm assuming that the mixed memory I have at present is running as 768 MB at 133 MHz. Is this a reasonable assumption? If so, would you expect the performance difference between 768 MB/133 MHz and 1 GB/333 MHz to be very noticeable? If I install the new 1 GB/400 or 333 MHz stick in slot 1, am I right in thinking that adding back the existing 512 MB/200 MHz stick in slot 2 would pull the whole 1.5 GB of system memory down to 200 MHz? If so, which would be better: 1.5 GB at 200 MHz, or the single 1 GB stick at the full 333 MHz that the FSB permits? Is more headroom more important than extra speed? Any help - or even opinions - gratefully received. I can't find reliable information, and I can't afford to make expensive mistakes.

    Read the article

  • Administrative shares in Windows 7 Pro not visible

    - by Chris Tybur
    My desktop machine has a clean install of Windows 7 Professional. For some reason the standard administrative shares Admin$, C$, D$, etc are not visible, either in Computer Management - Shared Folders - Shares or via net share. I also have a laptop with a clean install of Windows 7 Professional, and I can see the admin shares in both places. As such, I can map to \\laptop\c$ from the desktop, but I can't map to \\desktop\c$ from the laptop. I pretty much took the defaults during the Windows 7 installations. I've tried adding LocalAccountTokenFilterPolicy to the registry on the desktop, but that didn't work. On the desktop I've also disabled UAC, turned off Windows firewall, removed it from a homegroup, made sure file and printer sharing is turned on, but nothing has worked. There is some subtle difference between the two machines that I can't seem to find. I'm logging into both machines using a local account that is in the Administrators group. Both accounts have the same name and password. I really don't want to have to create a new share for the desktop's C drive, especially since C$ is visible and working on the laptop and therefore I should be able to make it work on the desktop. Any idea why the admin shares would work on one machine and not another? Or why LocalAccountTokenFilterPolicy would fail?
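
    For what it's worth, the two registry values usually involved here can be set from an elevated prompt roughly as follows (a sketch of the standard locations: LocalAccountTokenFilterPolicy is the value already tried above, and AutoShareWks is the value that controls whether the C$/ADMIN$ shares are created at all on workstation SKUs):

      REM Allow remote administrative access for local (non-domain) admin accounts
      reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" ^
          /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

      REM Make sure automatic administrative shares are not disabled
      reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" ^
          /v AutoShareWks /t REG_DWORD /d 1 /f

      REM Restart the Server service so the shares are recreated
      net stop server && net start server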

    Read the article

  • Unable to connect to second name of Windows 2008 Server R2 machine from XP

    - by Tumba
    I used the command netdom computername /add:newname.domainname.com to add a second name to a server running Windows 2008 Server R2. After restarting the server, I had DNS "A" entries for both names. In addition, the second name was added to HKLM\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters\OptionalNames, which I believe should have taken care of any NetBIOS resolution. From my Windows 7 workstation, I can ping both names and running net view on both names reveals the same list of resources. From Windows XP, I can ping both names, but net view only works on the first name. Running net view on the second name returns: System error 52 has occurred. You were not connected because a duplicate name exists on the network. Go to System in Control Panel to change the computer name and try again. What do I need to do to make the second name usable from XP clients? Update: I was able to resolve the problem by adding the REG_DWORD key DisableStrictNameChecking = 1 to HKLM\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters, then restarting the Server service. However, I do not understand why this was necessary.
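
    For reference, the fix described in the update can be applied from an elevated prompt along these lines (same value and path as mentioned above; treat it as a sketch):

      reg add "HKLM\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters" ^
          /v DisableStrictNameChecking /t REG_DWORD /d 1 /f

      REM Restart the Server service so the change takes effect
      net stop server && net start server

    As far as I understand it, strict name checking makes the Server service reject SMB connections addressed to a name it does not consider its own, which is why alternate names can fail for older clients until the value is set.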

    Read the article

  • How can I keep SSH's known_hosts up to date (semi-securely)?

    - by Chas. Owens
    Just to get this out in front so I am not told not to do this: the machines in question are all on a local network with little to no internet access (they aren't even well connected to the corporate network); everyone who has the ability to set up a man-in-the-middle attack already has root on the machines; and the machines are reinstalled as part of QA procedures, so having new host keys is important (we need to see how the other machines react). I am only trying to make my machine nicer to use. I do a lot of reinstalls on machines, which changes their host keys. This necessitates going into ~/.ssh/known_hosts on my machine, blowing away the old key and adding the new key. This is a massive pain in the tuckus, so I have started considering ways to automate it. I don't want to just blindly accept any host key, so patching OpenSSH to ignore host keys is out. I have considered creating a wrapper around the ssh command that will detect the error coming back from ssh and present me with a prompt to delete the old key or quit. I have also considered creating a daemon that would fetch the latest host key from each machine on a whitelist (there are about twenty machines that are being constantly reinstalled) and replace the old host key in known_hosts. How would you automate this process?
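
    A minimal sketch of the wrapper idea, using the stock OpenSSH tools (ssh-keygen -R removes the stale entry and ssh-keyscan fetches the current one; the whitelist file, its location and the script name are assumptions):

      #!/bin/bash
      # refresh-hostkey.sh <hostname>
      # Only refreshes keys for hosts on an explicit whitelist.
      HOST="$1"
      WHITELIST="$HOME/.ssh/rekeyable_hosts"

      if ! grep -qx "$HOST" "$WHITELIST"; then
          echo "$HOST is not on the whitelist; refusing to replace its key." >&2
          exit 1
      fi

      ssh-keygen -R "$HOST"                              # drop the stale entry from known_hosts
      ssh-keyscan -t rsa "$HOST" >> ~/.ssh/known_hosts   # record the host's current key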

    Read the article

  • Removing trailing slashes in WordPress blog hosted on IIS

    - by Zishan
    I have a WordPress blog hosted in my IIS virtual directory that has all URLs ending with a forward slash, for example http://www.example.com/blog/. I have the following rules defined in my web.config:

      <rule name="wordpress" patternSyntax="Wildcard">
        <match url="*" />
        <conditions>
          <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
          <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
        </conditions>
        <action type="Rewrite" url="index.php" />
      </rule>
      <rule name="Redirect-domain-to-www" patternSyntax="Wildcard" stopProcessing="true">
        <match url="*" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="example.com" />
        </conditions>
        <action type="Redirect" url="http://www.example.com/blog/{R:0}" />
      </rule>

    In addition, I tried adding the following rule for removing trailing slashes:

      <rule name="Remove trailing slash" stopProcessing="true">
        <match url="(.*)/$" />
        <conditions>
          <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
          <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
        </conditions>
        <action type="Redirect" redirectType="Permanent" url="{R:1}" />
      </rule>

    It seems that the last rule doesn't work at all. Anyone around here who has attempted to remove trailing slashes from WordPress blogs hosted on IIS?

    Read the article

  • Backup files from Linux client to Windows Server

    - by Andrew
    I'm trying to back up my files from my Linux box to my Windows Server 2008 machine as a push, so that when I delete them from my Linux box they remain on the Windows Server. I've found lots of sources that are similar, but most results were from Windows to Linux. I managed to find slightly more similar cases like "Using rsync and cygwin to Sync Files from a Linux Server to a Windows Notebook PC" and "rsync from Windows PC to remote Linux server", with the most similar being a backup from Linux to Windows Server, but done as a pull from the Windows Server. Initially I used Unison because I thought its 2-way capability would come in handy, and I would just have to set some configuration to make it 1-way. Unfortunately, I couldn't find the right configuration and only managed to synchronize using the command unison "profile" -ui text -auto -silent. When I deleted the files on my Linux box, the files on the server got deleted too, which of course isn't what I want. When I looked for Unison options, I only discovered the -force option, which didn't help, since what I wanted was an incremental update to the server. I found out I could achieve this with rsync and the -a (archive) option, which would keep adding files even if I deleted them from my Linux box. I installed Cygwin on my Windows Server and configured an SSH daemon, but I can't seem to get it working. I've also already configured Windows Firewall to open port 22 (both inbound and outbound). I used the following command from my Linux box:

      rsync -avrzn /folder/to/be/backed/up/ [email protected]:/cygdrive/c/place/to/store/backed/up/files

    (a - archive, v - verbose, r - recurse into subdirectories, z - compress, n - dry run) but it just won't work. Can anyone help me out? I don't mind using either Unison or rsync, as long as it achieves what I want.
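
    One small thing worth noting about the command quoted above: the -n flag is a dry run, so nothing is actually transferred while it is present. A hedged sketch of the real transfer, explicitly forcing the SSH transport (user@winserver and the paths are placeholders):

      # Real transfer: no -n (dry run), and -a already implies -r.
      # Without --delete, rsync never removes files from the destination,
      # so files deleted on the Linux box stay on the Windows server.
      rsync -avz -e ssh /folder/to/be/backed/up/ \
          user@winserver:/cygdrive/c/place/to/store/backed/up/files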

    Read the article

  • HTTP header 304 and caching?

    - by Royi Namir
    Our company uses these settings (don't ask me why): for every request, they want a new request to the server. This is an intranet system which uses only IE, and the setting is defined in the IE options (screenshot not included here). We also have Windows authentication (NTLM) in IIS 7. I have two questions, please. Question #1: when the browser makes a request for a CSS file (leave the 401 response aside for now - that is just how NTLM works), it sends an If-Modified-Since header. Why is it adding this header? How can I configure that? Why doesn't it honour the IE setting and try to download the file each time, as shown in the first screenshot? Question #2: the response (after the NTLM negotiation) was a 304 Not Modified, and I assume that's because we sent the request with the If-Modified-Since header. But there is a problem: it is effectively telling the browser to load the file from its cache, even though I told IE explicitly in the settings not to load from the cache. What am I missing here? Thanks a lot.
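
    For context, the conditional-GET exchange being described looks roughly like this on the wire (illustrative headers only, not captured from the system in question):

      GET /styles/site.css HTTP/1.1
      Host: intranet.local
      If-Modified-Since: Tue, 15 Nov 2011 08:12:31 GMT

      HTTP/1.1 304 Not Modified
      Date: Wed, 16 Nov 2011 09:00:00 GMT

    A 304 response carries no body; the server is saying the cached copy is still current, so the browser re-uses what it already has on disk. That is why the file appears to come "from cache" even when IE is set to check on every visit - the check still happens, it just ends in a 304.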

    Read the article

  • grub2 error: out of disk

    - by Nostradamnit
    Hi everyone, I recently installed ArchBang on a machine with Ubuntu and XP. I ran update-grub from Ubuntu and it found the new install and created an entry. However, when I try to boot it, I get:

      error: out of disk
      error: you need to load kernel first

    I've tried several things, including adding a new entry in 40_custom, but nothing changes. Here are the entries I have. The default one found by update-grub:

      ### BEGIN /etc/grub.d/30_os-prober ###
      menuentry "ArchBang Linux (on /dev/sda4)" {
          insmod part_msdos
          insmod ext2
          set root='(hd0,msdos4)'
          search --no-floppy --fs-uuid --set 75f96b44-3a8f-4727-9959-d669b9244f2a
          linux /boot/vmlinuz26 root=/dev/sda4 rootfstype=ext4 ro xorg=vesa quiet nomodeset swapon
          initrd /boot/kernel26.img
      }
      menuentry "ArchBang Linux Fallback (on /dev/sda4)" {
          insmod part_msdos
          insmod ext2
          set root='(hd0,msdos4)'
          search --no-floppy --fs-uuid --set 75f96b44-3a8f-4727-9959-d669b9244f2a
          linux /boot/vmlinuz26 root=/dev/sda4 rootfstype=ext4 ro xorg=vesa quiet nomodeset swapon
          initrd /boot/kernel26-fallback.img
      }
      ### END /etc/grub.d/30_os-prober ###

    And a custom entry in 40_custom based on various ideas found on the internet:

      menuentry "ArchBang Linux (on /dev/sda4)" {
          insmod part_msdos
          insmod ext2
          set root='(hd0,msdos4)'
          search --no-floppy --fs-uuid --set 75f96b44-3a8f-4727-9959-d669b9244f2a
          linux /boot/vmlinuz26 root=/dev/disk/by-uuid/75f96b44-3a8f-4727-9959-d669b9244f2a rootfstype=ext4 ro xorg=vesa quiet nomodeset swapon
          initrd /boot/kernel26.img
      }

    I think the problem has something to do with sda4 not being mounted at boot time... Thanks in advance for your help, Sam

    Read the article

  • How do I Forward root's email to an external email address?

    - by ErebusBat
    I have a small server (Ubuntu 10.04) at my house and I would like to forward root's email to my Gmail-hosted domain to get security notifications and whatnot. I ripped everything out and started from scratch and ran into some other issues. I now have sendmail working in the sense that I can mail [email protected] and receive the mail. HOWEVER, adding an address to /root/.forward does not actually forward the message. I get the following in my logs:

      Dec 22 14:04:37 batcave sendmail[4695]: oBML4bAT004695: to=<root@batcave>, ctladdr=aburns (1000/1000), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30075, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (oBML4bJ9004696 Message accepted for delivery)
      Dec 22 14:04:39 batcave sm-mta[4698]: STARTTLS=client, relay=[69.145.248.18], version=TLSv1/SSLv3, verify=FAIL, cipher=DES-CBC3-SHA, bits=168/168
      Dec 22 14:04:40 batcave sm-mta[4698]: oBML4bJ9004696: to=<[email protected]>, ctladdr=<[email protected]> (1000/1000), delay=00:00:03, xdelay=00:00:03, mailer=relay, pri=120336, relay=[69.145.248.18] [69.145.248.18], dsn=2.0.0, stat=Sent (OK 01/D4-00853-216621D4)

    You can see where my local sendmail instance accepts it and then hands it off to my ISP, but with the wrong address ([email protected]).
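
    As a point of comparison, the aliases route is the other standard way to redirect root's mail on an Ubuntu/sendmail box; a minimal sketch (the destination address is a placeholder):

      # /etc/aliases
      root: [email protected]

      # Rebuild the alias database and send a test message
      sudo newaliases
      echo "alias test" | mail -s "root mail forwarding test" root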

    Read the article

  • Slow connection to Linux MySQL from Windows only (XAMPP)

    - by Josh
    I'm having a problem with a PHP project (using Kohana 3.2 framework) on my Windows 7 64-bit machine connecting to the database. The development database is stored on a Ubuntu Linux server on the local network. Other development machines running OSX and Linux are connecting fine. There are no other Windows development machines to test with. I can access MySQL fine using MySQL Workbench, and other projects (which I believe to be less database heavy) run mostly ok, only occasionally getting timeout messages. I'm constantly getting Maximum execution time of 30 seconds exceeded when functions such as mysql_query() are run in this particular project. Specifically, the Kohana file where the timeout occurs is MODPATH\database\classes\kohana\database\mysql.php [ 186 ]. My local set-up is: Windows 7 Professional 64bit XAMPP 1.7.7 (PHP 5.3.8) The output of uname -a of the Linux server is: Linux peach 2.6.38-11-server #50-Ubuntu SMP Mon Sep 12 21:34:27 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux I've tried the following, with no success: Disabling Windows firewall Switching between using a persistant and normal connection In my.cnf, adding skip-name-resolve Increasing wait_timeout Enabling bind-address I've run out of ideas now, and have no idea how to debug an odd issue like this. Has anyone come across this before, or have any idea how I could find the root of the issue, or what might be the problem?

    Read the article

  • A little guidance setting up FTP server authentication on Windows Server 2008 R2 standard?

    - by Ropstah
    I have a (clean) server running Windows Server 2008 R2 Standard. I would just like to use it for serving a website and an FTP site through IIS. IIS is installed and serves my website properly. I have now added an FTP site, but when I try to log on using my username and password I get the following error: 530 User cannot login. From this article (http://support.microsoft.com/kb/200475) I understand that these four causes can be pointed out:
    1. The "Allow only anonymous connections" security setting has been turned on in the Microsoft Management Console (MMC). Not the case.
    2. The username does not have the "Log on locally" permission in User Manager. The user is in the Users group; however, I'm not able to log on through RDP. I tried configuring this by following this article through GPMC, but that only works when I'm logged in as a domain user on a domain controller, which I'm not: I'm logged in as administrator.
    3. The username does not have the "Access this computer from the network" permission in User Manager. Not sure what this implies...?
    4. The domain name was not specified together with the username (in the form DOMAIN\username). Tried adding the server name (server\username), not working...
    I am an absolute server noob and I'd just like to be able to connect through FTP... Any guidance is highly appreciated!

    Read the article

  • Dell PowerEdge T710, add a new hard disk, how to?

    - by user1340802
    I need to add a new hard disk to a PowerEdge T710 running VMware ESXi 4. The disk is a 'normal' 1 TB desktop hard disk (that is, it did not come from Dell, and I have no caddy for it, so I can't plug it into any of the front bays). I would like to add this disk, for a virtual machine that needs space, as easily as possible. I have found that there is an available SATA cable with its power connector, so may I just plug the disk into these and use the empty 5.25" slot available under the CD drive (with a 5.25"-to-3.5" bay adapter)? (Even if, this way, it seems that I bypass the RAID controller that owns the front bays.) That seems easier than adding the disk to the already-defined RAID (by the way, I am also not sure how to do that, and I would not risk messing up things that already work). What other steps would I have to take? (Sorry, I am a real beginner at VMware ESXi and PowerEdge management. I have seen that there is some management from the BIOS (Ctrl+R at start-up) to make the disk visible or initialize it, but I am really not sure of the steps needed...) Thank you, best.

    Read the article

  • What exactly is a daemon? (How do I run a root command from a script that Apache runs as the www-data user?)

    - by user224235
    I am trying to run this command from a WSGI script: service httpd restart. The problem is that this command can only be run by root, and Apache uses the www-data user. It has been said that the solution is to use a daemon process; I suppose the idea is to write the command to a file that will be executed by a script running as root. It's difficult to understand why they would call this a "daemon process" and try to scare me; perhaps it should have been called a proxy process. When I got the idea that this was a proxy process, I thought about adding a line to /var/spool/cron/root so that cron would execute the command for me as root. But of course that means I have to get the system time, add one second to it, and then add the line - and my script demands output instantly. So I suppose I need to create a daemon process that works like cron: in other words, a script that executes whatever command it finds in a plain file. But would this daemon process be running a while loop 24/7, checking every second? Would that not waste resources? It only needs to wake up and check for a command when there is actually a command to execute. In PHP and other languages, running a while loop when there is nothing to execute can waste server resources, so why should a daemon process constantly be listening? I only want it to listen and execute when it is needed; I do not need a process that is constantly listening.
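
    For what it's worth, a common alternative to a hand-rolled daemon for this particular case is a narrowly scoped sudoers rule, so the web user may run exactly one privileged command and nothing else. A sketch, assuming the script really does need to restart Apache (the file name is arbitrary, and the rule would be added with visudo):

      # /etc/sudoers.d/restart-httpd  (edit with visudo)
      # Let the web server user run this one command as root, without a password.
      www-data ALL=(root) NOPASSWD: /sbin/service httpd restart

    The WSGI script would then invoke it as, for example, subprocess.call(["sudo", "/sbin/service", "httpd", "restart"]). Whether granting even that much is acceptable is a judgement call; the daemon approach (a root-owned watcher that polls or is signalled, then runs the command) avoids giving the web user any sudo rights at all.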

    Read the article

  • Strange problem with Google Mail and IMAP on Outlook 2007

    - by Alex C.
    I work for a small non-profit organization. We have about 35 administrative employees who use e-mail. We're on a Windows network with a domain. Everyone is running XP Pro and Office 2007 with all updates/patches. We used to use POP3 mail through a local provider. However, we recently signed-up for a free Google Apps account, and we switched to IMAP mail through Google. Everyone uses Outlook 2007 as the client. For about ten days, everything was working fine. Yesterday afternoon, we suddenly developed a strange and annoying problem. Every time you send an e-mail message, a copy of your outgoing message shows up in your inbox. It's as if you're adding your own address to the CC: line of every message. Nothing has changed on our end. I was hoping that the problem was a temporary glitch that would resolve itself, but here we are about 24 hours later, and it's still happening. I searched Twitter, and there were a handful of vague messages about issues with Google mail and IMAP, but I didn't see any references to this specific problem. Any thoughts on what's going on here and how to fix it?

    Read the article

  • NGINX returning 404 error on a valid url

    - by Harrison
    We have a site that runs PHP-FPM and NGINX. The application sends invitations to site members that are keyed with 40-character random strings (alphanumerics only - example below). Today, for the first time, we ran into an issue with this approach. The following URL:

      http://oursite.com/notices/response/approve/1960/OzH0pedV3rJhefFlMezDuoOQSomlUVdhJUliAhjS

    is returning a 404 error. This URL format has been working for 6 months now without an issue, and other URLs following this exact format continue to resolve properly. We have a very basic config with a simple redirect to a front controller, and everything else has been running fine for a while now. Also, if we change the last character from an "S" to anything other than a lower-case "s", there is no 404 error and the site handles the request properly, so I'm wondering if there's some security module that might see something wrong with this specific string... Not sure if that makes any sense. We are not sure where to look to find out what specifically is causing the issue, so any direction would be greatly appreciated. Thanks! Update: adding a slash to the end of the URL allowed it to be handled properly... Would still like to get to the bottom of the issue though. Solved: the problem was caused by part of my configuration... Realized I should have posted, but was headed out of town and didn't have a chance. Any URL that ended in, say, "css" or "js" and not necessarily preceded by a dot (so, for example, http://site.com/response/somerandomestringcss) was interpreted as a request for a file, and the request was not routed through the front controller. The problem was my regex for disabling logging and setting expiration headers on jpgs, gifs, icos, etc. I replaced this:

      location ~* ^.+(jpg|jpeg|gif|css|png|js|ico)$ {

    with this:

      location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {

    And now URLs ending in css, js, png, etc., are properly routed through the front controller. Hopefully that helps someone else out.

    Read the article

  • Perfectly reproducible SELECT statement default ordering issue

    - by Dave
    Hi, I've recently been chasing an issue with a client's DB... a solution has been found, but I can't recreate the cause. Essentially, we're doing a SELECT * FROM MyTable WHERE ArbitraryColumn = 75, where MyTable has an identity column called MyIdentityColumn, incremented by one on each insert. Naturally, and normally, I would assume that the rows would be returned in the order in which they were inserted (a bad assumption, but one forced onto me by an inherited application, which has been patched). Essentially, I would like suggestions as to why the ordering differs between environments. The database was restored to my local machine (same OS, same SQL Server version - 200 sp3, same collation) from the same backup that was restored as a test DB on the client site. When I perform the above select locally, I get the rows in insert order (i.e. identity column ascending). On the client, the order seems random (but the same 'random' order each time)... A few other points: I have the same collation on my test server as the client, the same DB backup restored to a test DB only I can access, the same SQL Server version and service pack, and the same OS; the test DB is a new DB with a new log and MDF. I have the problem 'solved' by adding an explicit ORDER BY clause, but I want to understand the cause of the issue, given that my attempts to recreate it have been futile while it is perfectly recreatable on the client server... Thanks in advance, Dave
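
    For completeness, the explicit ordering referred to above would look something like this (table and column names are the ones quoted in the question); without an ORDER BY, SQL Server makes no guarantee about row order, so any observed "default" order is just an artifact of the chosen query plan:

      SELECT *
      FROM MyTable
      WHERE ArbitraryColumn = 75
      ORDER BY MyIdentityColumn;   -- the only way to get a guaranteed, stable order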

    Read the article
