Search Results

Search found 60540 results on 2422 pages for 'i cant tell you my name'.


  • Apache2/mod_fcgid/PHP Process Limits Not Respected

    - by Daniel
    I've recently moved to Apache2 / mod_fcgid / PHP from nginx / php_fpm. This is the second server on which I've made this migration, but it's used much less frequently than the first, which is working like a charm. The problem is with the PHP processes it's spawning. Looking at the mod_fcgid documentation, the default for killing idle processes appears to be 300 seconds; I've changed that to 20. At this point I'd be fine if even 300 worked - but it's not happening. It's been running for nearly a day now, and server-status shows 12 active processes:

        Process name: php5
        Pid    Active  Idle   Accesses  State
        19243  84879   14420  11        Ready

        Process name: php5
        Pid    Active  Idle   Accesses  State
        20954  82143   149    22        Ready
        20947  82149   149    22        Ready
        20953  82143   149    13        Ready

        Process name: php5
        Pid    Active  Idle   Accesses  State
        20589  82765   23644  72        Ready

        Process name: php5
        Pid    Active  Idle   Accesses  State
        17663  86103   2034   117       Ready

        Process name: php5
        Pid    Active  Idle   Accesses  State
        19862  83961   1976   91        Ready

        Process name: php5
        Pid    Active  Idle   Accesses  State
        18495  85825   5164   18        Ready

        Process name: php5
        Pid    Active  Idle   Accesses  State
        25463  75109   23948  24        Ready

        Process name: php5
        Pid    Active  Idle   Accesses  State
        2466   60019   60016  2         Ready

        Process name: php5
        Pid    Active  Idle   Accesses  State
        20729  82541   12592  23        Ready

        Process name: php5
        Pid    Active  Idle   Accesses  State
        22135  80616   46361  6         Ready

    PHP applications are not being served at this point - Apache is returning a 503. However, it is still serving the server-status module, and mod_mono/Mono 2.10 applications are still being served. The problem is only with PHP. /etc/apache2/mods-available/fcgid.conf:

        <IfModule mod_fcgid.c>
          AddHandler fcgid-script .fcgi
          FcgidConnectTimeout 10
          FcgidMaxRequestsPerProcess 500
          FcgidIdleTimeout 20
          FcgidFixPathinfo 1
          FcgidMaxProcesses 10
        </IfModule>

    (heh - FcgidMaxProcesses isn't being respected either...) Of course, fcgid.conf is symlinked in mods-enabled.
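    One frequent reason mod_fcgid's process and idle limits appear to be ignored is php-cgi spawning its own children via PHP_FCGI_CHILDREN, so mod_fcgid only ever sees and counts the parent processes. A minimal wrapper sketch - the wrapper path and php5-cgi location are guesses for this setup, and it assumes a matching "FcgidWrapper /usr/local/bin/php-fcgi-wrapper .php" line in fcgid.conf:

        #!/bin/sh
        # /usr/local/bin/php-fcgi-wrapper (hypothetical path)
        # let mod_fcgid do the process management itself: no self-spawned children
        export PHP_FCGI_CHILDREN=0
        # keep PHP's own request recycling in line with FcgidMaxRequestsPerProcess
        export PHP_FCGI_MAX_REQUESTS=500
        exec /usr/bin/php5-cgi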

    Read the article

  • Is it bad to have the Reverse DNS for two IPs point to the same domain name?

    - by Daniel Vandersluis
    I am in the process of setting up a new server for my domain (the site will be moved, it is not for load balancing or the like), which has a different IP address from my existing server. My current server has a reverse DNS PTR record set up pointing its IP to mydomain.com. Is it bad to set up a reverse DNS PTR record for the new IP pointing to mydomain.com as well? Or should I wait until I do my migration to set up the record?
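    A quick way to see what PTR records the two addresses currently have, and whether forward and reverse already agree, is dig (the IPs below are placeholders for the old and new server):

        dig +short -x 203.0.113.10
        dig +short -x 203.0.113.20
        dig +short mydomain.com A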

    Read the article

  • bash/sed/awk/etc remove every other newline

    - by carillonator
    A bash command outputs this:

        Runtime Name: vmhba2:C0:T3:L14
        Group State: active
        Runtime Name: vmhba3:C0:T0:L14
        Group State: active unoptimized
        Runtime Name: vmhba2:C0:T1:L14
        Group State: active unoptimized
        Runtime Name: vmhba3:C0:T3:L14
        Group State: active
        Runtime Name: vmhba2:C0:T2:L14
        Group State: active

    I'd like to pipe it to something to make it look like this:

        Runtime Name: vmhba2:C0:T1:L14 Group State: active
        Runtime Name: vmhba3:C0:T3:L14 Group State: active unoptimized
        Runtime Name: vmhba2:C0:T2:L14 Group State: active
        [...]

    i.e. remove every other newline. I tried ... | tr "\nGroup" " " but it removed all newlines and ate up some other letters as well. Thanks
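    (tr translates single characters rather than strings, which is why the "\nGroup" attempt also removed the letters G, r, o, u and p.) A few minimal one-liners that join each pair of lines - sketches only, untested against the real command's output, with some_command standing in for the actual command:

        # sed: pull the next line into the pattern space, then replace the embedded newline
        some_command | sed 'N;s/\n/ /'

        # paste: merge every two consecutive input lines with a space
        some_command | paste -d ' ' - -

        # awk: end odd lines with a space and even lines with a newline
        some_command | awk 'ORS = NR % 2 ? " " : "\n"'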

    Read the article

  • Is it bad to have the Reverse DNS for two IPs point to the same domain name?

    - by Daniel Vandersluis
    I am in the process of setting up a new server for my web application (the site will be moved, it is not for load balancing or the like), which has a different IP address from my existing server. My current server has a reverse DNS PTR record set up pointing its IP to mydomain.com. Is it bad to set up a reverse DNS PTR record for the new IP pointing to mydomain.com as well? Or should I wait until I do my migration to set up the record? Update: I forgot to mention, the A record for mydomain.com points to the old server's IP address, not the new one, if it matters.

    Read the article

  • Does sending e-mail in the name of customers increase the risk of being marked as a spammer?

    - by Adrian Grigore
    Hi, we are developing a SaaS website application that lets users send invoices to their clients. Ideally, these e-mails should appear to originate from our customers, so the sender e-mail address domain will not match the reverse IP entry for our server. In effect we would be forging their e-mail address, but of course with their consent. Will that result in a higher probability of being marked as a spammer / their e-mails being marked as spam? If yes, how bad is the penalty? And what about people who have an e-mail address originating from an SPF-enabled domain? I guess that would be the majority of the big e-mail providers.
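    A quick way to check whether a given customer's domain already publishes an SPF policy that would flag mail sent from an unrelated server (the domain below is a placeholder):

        # look for a v=spf1 record among the domain's TXT records
        dig +short TXT customer-example.com | grep -i 'v=spf1'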

    Read the article

  • How to edit known_hosts when several hosts share the same IP and DNS name?

    - by Frédéric Grosshans
    I regularly ssh into a computer which is a dual-boot OS X / Linux machine. The two OS instances do not share the same host key, so they can be seen as two hosts sharing the same IP and DNS name. Let's say the IP is 192.168.0.9, and the names are hostname and hostname.domainname. As far as I understand, the solution for being able to connect to the two hosts is to add them both to the ~/.ssh/known_hosts file. However, that is easier said than done, because the file is hashed and probably has several entries per host (192.168.0.9, hostname, hostname.domainname). As a consequence, I get the following warning:

        Warning: the ECDSA host key for 'hostname' differs from the key for the IP address '192.168.0.9'

    Is there an easy way to edit the known_hosts file while keeping the hashes? For example, how can I find the lines corresponding to a given hostname? How can I generate the hashes for some known hosts? The ideal solution would allow me to connect seamlessly to this computer with ssh, no matter whether I call it 192.168.0.9, hostname or hostname.domainname, nor whether it uses its Linux host key or its OS X host key. However, I still want to receive a warning if there is a real man-in-the-middle attack, i.e. if another key than these two is used.
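    ssh-keygen can search, prune and re-hash the hashed file directly - a minimal sketch, assuming stock OpenSSH tooling:

        # find the (hashed) known_hosts lines matching a name or address
        ssh-keygen -F hostname
        ssh-keygen -F 192.168.0.9

        # drop all keys stored for one name, then record the key currently offered
        ssh-keygen -R hostname
        ssh-keyscan -t ecdsa hostname >> ~/.ssh/known_hosts

        # re-hash the file after editing it by hand
        ssh-keygen -H

    OpenSSH can store several keys for the same name in known_hosts, so the Linux and OS X keys can coexist there; the warning about the IP entry can be silenced with CheckHostIP no in ~/.ssh/config, at the cost of losing that extra check.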

    Read the article

  • Puppet - Is it possible to use a global var to pull in a template with the same name?

    - by Mike Purcell
    I'm new to Puppet. As such, I am trying to work out the best way to set up my manifests so that they make sense. Following the DRY (don't repeat yourself) principle, I am trying to load common directives in one template, then load environment-specific directives from a file matching the environment. Basically like this:

        # nodes.pp
        node base_dev {
          $service_env = 'dev'
        }
        node 'service1.ownij.lan' inherits base_dev {
          include global_env_specific
        }

        class global_env_specific {
          include shell::bash
        }

        # modules/shell/bash.pp
        class shell::bash inherits shell {
          notify { "Service env: ${service_env}": }
          file { '/etc/profile.d/custom_test.sh':
            content => template('_global/prefix.erb', 'shell/bash/global.erb', 'shell/bash/$service_env.erb'),
            mode    => 644,
          }
        }

    But every time I run puppet agent --test, Puppet complains that it can't find the shell/bash/$service_env.erb file, even though I double-checked that it exists. I know the variable is accessible, because the notify statement outputs the expected value, so I suspect I am doing something that is not allowed. I know I could have a single template.erb and pass variables to the template, which would work in this case because the custom.sh file is small and doesn't change much across environments, but for more complex configs (httpd, solr, etc.) I'd prefer to access environment-specific files. I am also aware that I can specify environment-specific module paths, but I'd prefer to handle this behaviour at the template level instead of having several closely named directories. Thanks.

    Read the article

  • What is the correct cipher name for RC4 in Chrome?

    - by qbi
    I want to remove RC4 from Google Chrome and found the command-line option --cipher-suite-blacklist. However, I wasn't able to figure out what the correct notation for RC4 is. Whatever I tried so far only produced the message:

        ERROR:ssl_config_service_manager_pref.cc(55)] Ignoring unrecognized or unparsable cipher suite:

    Even the names listed in ssl_cipher_suite_names.cc don't work. What should I enter to remove RC4 as a cipher for SSL/TLS? I'm working with several different versions of GNU/Linux and sometimes also with Windows, so it would be nice if the command-line argument worked under all OSes. I used the following commands:

        chrome --cipher-suite-blacklist=TLS_RSA_WITH_RC4_128_MD5 --ssl-version-min=tls1.1
        chrome --cipher-suite-blacklist=RC4 --ssl-version-min=tls1.1
        chrome --cipher-suite-blacklist=0xXYZ,0xUVW --ssl-version-min=tls1.1  # XYZ and UVW are some hexadecimal numbers
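    For what it's worth, the flag appears to expect the 16-bit IANA cipher-suite identifiers in hex rather than names. Assuming the goal is to drop the common RC4 suites, their IANA assignments are 0x0004 and 0x0005 (RSA with RC4-128, MD5/SHA) plus 0xc007 and 0xc011 (the ECDHE RC4 suites):

        # blacklist the usual RC4 suites by number and keep the TLS floor at 1.1
        chrome --cipher-suite-blacklist=0x0004,0x0005,0xc007,0xc011 --ssl-version-min=tls1.1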

    Read the article

  • Why is awstats reporting my static IP instead of Domain Name?

    - by Austin
    In AWStats, under "Links from an external page (other web sites except search engines)", it has generated a list of pages that linked to my site. I see pages like Bing, YouTube, HotFrog, etc. However, there are many internal links within the list as well. Towards the bottom it is reporting the following:

        http://72.249.150.9/distributors.php            5   2.4 %
        http://72.249.150.9/contact/                    5   2.4 %
        http://72.249.150.9/catalog/                    4   1.9 %
        http://72.249.150.9/flex-point-hockey-grip.php  5   2.4 %
        http://72.249.150.9/sticky-grip-foam.php        5   2.4 %
        http://72.249.150.9/video.php                   10  4.8 %
        http://72.249.150.9/dealers/                    5   2.4 %
        http://72.249.150.9/feedback/                   5   2.4 %
        http://72.249.150.9/products.php                10  4.8 %
        http://72.249.150.9/ergo-hockey-grip.php        5   2.4 %
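    AWStats decides which referrers count as "internal" from the SiteDomain and HostAliases directives in the per-site config; if the server also answers on its bare IP, that IP normally has to be listed in HostAliases, otherwise hits arriving via the IP show up as external links. A quick check (the config path is a guess and varies by distribution):

        # see how this site is identified to AWStats
        grep -E '^(SiteDomain|HostAliases)' /etc/awstats/awstats.www.example.com.conf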

    Read the article

  • Assign fixed IP address via DHCP by DNS lookup

    - by Janoszen
    Preface: I'm building a virtualization environment with Ubuntu 14.04 and LXC. I don't want to write my own template since the upgrade from 12.04 to 14.04 has shown that backwards compatibility is not guaranteed. Therefore I'm deploying my virtual machines via lxc-create, using the default Ubuntu template. The DNS for the servers is provided by Amazon Route 53, so no local DNS server is needed. I also use Puppet to configure my servers, so I want to keep the manual deployment effort minimal. Now, the default Ubuntu template assigns IP addresses via DHCP. Therefore, I need a local DHCP server to assign IP addresses to the nodes, so I can SSH into them and get Puppet running. Since Puppet requires a proper DNS setup, assigning temporary IP addresses is not an option; the client needs to get the right hostname and IP address from the start.

    Question: What DHCP server do I use, and how do I get it to assign the IP address based only on the host-name DHCP option, by performing a DNS lookup on that very host name?

    What I've tried: I tried to make it work using the ISC DHCP server, however, the manual clearly states:

        Please be aware that only the dhcp-client-identifier option and the hardware
        address can be used to match a host declaration, or the host-identifier option
        parameter for DHCPv6 servers. For example, it is not possible to match a host
        declaration to a host-name option. This is because the host-name option cannot
        be guaranteed to be unique for any given client, whereas both the hardware
        address and dhcp-client-identifier option are at least theoretically guaranteed
        to be unique to a given client.

    I also tried to create a class that matches the hostname like this:

        class "my-client-name" {
            match if option host-name = "my-client-name";
            fixed-address my-client-name.my-domain.com;
        }

    Unfortunately the fixed-address option is not allowed in class statements. I can replace it with a 1-size pool, which works as expected:

        subnet 10.103.0.0 netmask 255.255.0.0 {
            option routers 10.103.1.1;
            class "my-client-name" {
                match if option host-name = "my-client-name";
            }
            pool {
                allow members of "my-client-name";
                range 10.103.1.2 10.103.1.2;
            }
        }

    However, this would require me to administer the IP addresses in two places (Amazon Route 53 and the DHCP server), which I would prefer not to do.

    About security: Since this is only used in the bootstrapping phase on an internal network and is then replaced by a static network configuration by Puppet, this shouldn't be an issue from a security standpoint. I am, however, aware that the virtual machine bootstraps with "ubuntu:ubuntu" credentials, which I intend to fix once this is running.
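    One possible workaround, sketched here under the assumption that dnsmasq is acceptable in place of ISC dhcpd for the bootstrap network: dnsmasq's dhcp-host option can match on the host name the client sends, and the fixed addresses can be generated from Route 53 with dig so they are only administered in one place. The host names, domain and output path below are placeholders:

        #!/bin/sh
        # regenerate dnsmasq reservations from the public DNS zone
        DOMAIN=my-domain.com
        for name in service1 service2 service3; do
            ip=$(dig +short "${name}.${DOMAIN}" A)
            [ -n "$ip" ] && echo "dhcp-host=${name},${ip}"
        done > /etc/dnsmasq.d/lxc-reservations.conf
        # dnsmasq only reads dnsmasq.d at startup, so restart rather than reload
        service dnsmasq restart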

    Read the article

  • How to block a user in Apache httpd from accessing *.php files inside a directory, so the user has to access it via the directory name

    - by Oxi
    My requirement looks simple, but Googling has not helped me yet. I want to show a 404 page to a user (not redirect to another folder or file) who tries to access *.php files on my website. For example: when a client asks for www.example.com/home/ I want to show the content, but when the user requests www.example.com/home/index.php I want to show a 404 page. I tried different methods and nothing worked for me; one of the attempts is shown below:

        <Directory "C:/xampp/htdocs/*">
          <FilesMatch "^\.php">
            Order Deny,Allow
            Deny from all
            ErrorDocument 403 /test/404/
            ErrorDocument 404 /test/404/
          </FilesMatch>
        </Directory>

    Thanks in advance.

    Read the article

  • Windows 7 comments field missing when browsing network

    - by Toymangenie
    I have just purchased three Windows 7 Professional Dell 64-bit PCs for testing prior to upgrading our company’s 120+ PCs from Windows XP Professional. The setup is a standard domain with a Windows Server 2003 32-bit server. We name each PC XP1 to XP150 so that when users join or leave, I don’t have to rename the PC. We use the Description field to record the user’s name on each PC. We also have a share set up on each PC using the user’s name. When I browse the network using Windows Explorer in XP, I get a useful display: the left pane shows the PC number and the right pane shows Name and Comments, so for example I would see "XP01  Fred Bloggs" (each PC on a new row). The right pane is my main tool for administering the network; I can easily see the PC number and the name of the user. However, in Windows 7 this seems to have been thrown out of the window and replaced with columns that I do not need and that, in my case, always display the same info: "Name", "Category", "Workgroup", "Network Location". The Name column gives the PC number (XP10, etc.) and the other three columns display identical, useless information, so I can’t see who is using XP10. When I am in “help desk” mode, I would naturally ask the user’s name and use my remote desktop client to view their screen. The user isn’t aware of their PC name, so I am finding it impossible to match the user name with a PC number. Any ideas how to overcome this "by design" change in Windows 7?

    Read the article

  • How the heck is http://to./ a valid domain name?

    - by Chris
    Apparently it's a URL shortener. It resolves just fine in Chrome and Firefox. How is this a valid top-level domain? Update: for the people saying it's browser shenanigans, why is it that http://com./ does not take me to http://www.com/? And do browsers ever send you a response from some place other than what's actually in the address bar? Aside from framesets and things like that, I thought browsers tried really hard to send you content only from the site in the address bar, to help guard against phishing.
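    "to" is the country-code TLD for Tonga, and nothing in the DNS prevents a TLD zone from publishing an A record at its apex, which is what makes a bare http://to./ reachable. A quick check from a shell (the result simply reflects whatever the zone publishes at the time):

        # does the bare TLD have an address record?
        dig +short to. A
        # compare with a TLD that publishes none
        dig +short com. A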

    Read the article

  • Why is this setting for Name-based Virtual Host settings not working?

    - by Kave
    I have two domains (siteA.com and siteB.com) that point to the same webserver, and I would like to show different web pages for each. The steps I have taken so far are to copy the default site (siteA) to siteB:

        1) sudo cp /etc/apache2/sites-available/default /etc/apache2/sites-available/siteB
        2) sudo vim /etc/apache2/sites-available/siteB

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/siteB
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/siteB>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride FileInfo Indexes
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Then I created /var/www/siteB and put a sample index.html in there. However, when I load my domain siteB.com I still get directed to /var/www/siteA. Why is that? Do I have to rename /etc/apache2/sites-available/default to /etc/apache2/sites-available/siteA as well?

    UPDATE: Thanks to the answer below, it seems that besides enabling the site I had also forgotten another entry:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName siteB.com
            ServerAlias www.siteB.com
        </VirtualHost>

    In order to include all subdomains as well, do:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName siteB.com
            ServerAlias *.siteB.com
        </VirtualHost>

    Same goes for siteA.
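    On Debian/Ubuntu the new vhost also has to be enabled and Apache reloaded before it is ever consulted - roughly (stock command names assumed):

        sudo a2ensite siteB
        sudo apache2ctl configtest
        sudo service apache2 reload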

    Read the article

  • What does the directory-name: '~MntWIM' on my Windows 7 / C:\ drive mean?

    - by J Puk
    This '~MntWIM' folder on my Windows 7 C:\ drive takes up 694 MB (728.418.589 bytes) on the hard drive. It contains 3 subdirectories:
    1st = Program Files, containing zero volume
    2nd = Users, containing zero volume
    3rd = Windows, containing 693 MB (726.823.197 bytes) on the hard drive
    It all looks a bit useless to me, so the question is: is it safe to delete the lot? Or does it have an important function there? Hoping for an answer from which I could learn something. B.R. JP

    Read the article

  • How to sort folders by last characters of folder name in Windows 7?

    - by nigelc
    I have a stack of client folders with names like this:

        eastcoal 008
        mee 022
        orr 047
        owaka 032
        owen 025
        powernet006
        redpath 031

    Normally this works, but sometimes I'd like to sort by the number. So how can I sort folders by the last characters in the folder string? Characters will always be three consecutive numbers, and will run from 001 to 999 (until I've done a thousand jobs, which will take years).

    Read the article

  • Retrieve a domain name based on an IP Address?

    - by Neil Kodner
    I'm reviewing some apache logs, specifically with respect to downloaded files. I'm interested in knowing, if possible, which domain is responsible for the download, given an IP address. I've given nslookup a try and it seems to (mostly) get the job done but it returns all sorts of extraneous information. Ideally, I pass in an IP and receive a domain back. Before I write a shell script to parse the output of nslookup to capture the domain, I'd like to know if this is the best way of approaching this problem, or if there is a more tried-and-true method of doing this. Specifically, I'd like to know if an address resolves to an amazonaws.com domain. I understand that this might be difficult because EC2 machines are dynamically created and destroyed - I'd like to know if the IP addresses for AWS/EC2/EMR machines fit any sort of addressing pattern.
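    For scripting, dig is usually easier to parse than nslookup because +short prints only the answer, and EC2 addresses normally reverse-resolve into Amazon's own namespace (e.g. ec2-w-x-y-z.compute-1.amazonaws.com), so matching on the amazonaws.com suffix is a reasonable first pass. A rough sketch, with a placeholder IP:

        # print just the PTR name
        dig +short -x 203.0.113.10

        # flag addresses that reverse into Amazon's namespace
        dig +short -x 203.0.113.10 | grep -q 'amazonaws\.com\.$' && echo "looks like AWS"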

    Read the article

  • Unable to reach files in subfolder with domain name in path in IIS 5.

    - by Chuck Conway
    In IIS 5, files under the URL http://acme.com/_cache/cache-www.acme.com/v3.css are not accessible - all files below "cache-www.acme.com" are unreachable. I've verified that the files exist. Permissions are not a problem: I've assigned "Everyone" to the files and given "Everyone" full rights. What I have determined is that in IIS 5, if there is a domain name in the folder path, IIS 5 gets confused... Other JavaScript files outside that directory come down fine... Any thoughts?

    Read the article

  • What's the proper way to setup a client chosen domain name?

    - by Greg
    In my web app, I'm toying with the idea of giving my users the opportunity to select a subdomain of their choosing, so they could pick something like foobar.myapp.com, where foobar is their chosen subdomain. What is the proper way to go about setting up something like this? .htaccess? Some API for writing virtual hosts? The application would still always map to one directory on my server; I just want to give them a custom URL.

    Read the article

  • Nginx redirect one path to another

    - by SteveEdson
    I'm sure this has been asked before, but I can't find a solution that works. A website has switched CMS services but has the same domain; how do I set up an nginx rewrite for a single page? E.g.:

        Old page: http://sitedomain.co.uk/content/unique-page-name
        New page: http://sitedomain.co.uk/new-name/unique-page-name

    Please note, I don't want everything within the content page to be redirected, but literally just the URL mentioned above. I have about 9 redirects to set up, none of which fit a pattern. Thanks! Edit: I found this solution, which seems to be working, except for the fact that it redirects without a slash:

        if ( $request_filename ~ content/unique-page-name/ ) {
            rewrite ^ http://sitedomain.co.uk/new-name/unique-page-name/? permanent;
        }

    But this redirects to: http://sitedomain.co.uknew-name/unique-page-name/

    Read the article

  • Is there a way to setup a hotspot with a domain name rather than IP address?

    - by WagnerMatosUK
    Basically I've set up a hotspot and it's currently being accessed through an IP address. I'd like to use a hostname instead. This is for internal use only, meaning the ODROID device which is being used as the access point is connected to the internet via ethernet and only a few devices will access the AP. My setup details: Arch Linux on an ODROID U3 device, using hostapd and a DHCP server. PS: I'm quite inexperienced with networking, so I might be missing something obvious here. Thanks in advance.
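    One way this is often done, sketched here on the assumption that dnsmasq may replace the separate DHCP server on the AP (it serves both DHCP and DNS, and by default hands itself out as the resolver to its DHCP clients); the name hotspot.lan and the 192.168.12.x addresses are placeholders:

        # append a minimal hotspot config to dnsmasq and restart it
        printf '%s\n' \
            'interface=wlan0' \
            'dhcp-range=192.168.12.50,192.168.12.150,12h' \
            'address=/hotspot.lan/192.168.12.1' >> /etc/dnsmasq.conf
        systemctl restart dnsmasq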

    Read the article

  • What are the practical limits on file extension name lengths?

    - by GorillaSandwich
    I started using DOS back before Windows, and ever since have taken it for granted that:
    - every file has a file extension, like .txt, .jpg, etc.
    - that extension is always short (usually 3 letters)
    I learned early that the extension is basically just a hint to the OS as to what the content type is. Eventually I got exposed to Mac and Linux, files with no extensions, etc. And of course I've seen shorter extensions, like .rb and .py. I just noticed that markdown-formatted files can have the extension .markdown, and it made me wonder - how long can that extension be? If I make it .mycrazylongextensiontypewoohoo, will certain operating systems or programs choke on the file? Are extension names generally short just for convenience, or is this based on some limitation, legacy or current?
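    A quick sanity check on a typical Linux filesystem, where the only hard limit is the 255-byte cap on the whole file name and the extension gets no special treatment at the filesystem level:

        # build a 200-character extension and create a file using it
        ext=$(printf 'x%.0s' $(seq 1 200))
        touch "demo.${ext}" && ls "demo.${ext}"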

    Read the article
