Search Results

Search found 12283 results on 492 pages for 'tcp port'.

Page 429 of 492

  • High-performance Academic Server [closed]

    - by PHPsmith
    Suppose I want to build a server for a university's academic needs. The server is dedicated to a single site, where users (students and lecturers) just view and fill in academic data, but at peak times (e.g. once a semester) about 12,000 students will access the site simultaneously. Due to limited resources, I have to build the server using free software (except for the operating system, Windows 7, which the university has already provided). The hardware is also fixed: an ordinary 4-core machine (e.g. an Ivy Bridge Intel Core i7-3770) with about 16 GB of DDR3-1600 memory and one RJ-45 port (Intel 82579 Gigabit Ethernet). Within these limits I have to choose the software (web server, database, etc.) appropriate to the goal. I have decided to build the site in PHP. Please help me by answering the following questions based on your expertise (my prime candidates after googling are given in parentheses):

    1. Which web server is fastest, most stable and most secure when implemented and optimized for PHP, and why? (nginx)
    2. Which PHP accelerator is fastest, most stable and most compatible with the selected web server, and why? (APC with Zend Optimizer+)
    3. Which database is fastest, most stable and most secure when optimized for the selected web server and PHP accelerator? (MySQL)
    4. Are there any mistakes in my assumptions so far? If so, please enlighten me.
    5. Is there anything else I need to know to achieve this goal? If so, please enlighten me.

    I understand that performance also depends on how the application code is written, so assume the site will be built as efficiently as possible (e.g. using AJAX).
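    Whichever stack is recommended, I plan to measure it against the 12,000-user target on the real hardware rather than trust published benchmarks. A minimal smoke test with ApacheBench (shipped with Apache's httpd tools; the URL and numbers are placeholders I would adapt):

        # 10,000 requests, 500 concurrent, against a representative PHP page
        ab -n 10000 -c 500 http://your-server/index.php

    My assumption is that if response times collapse well below the target concurrency, the bottleneck is PHP workers or database connections rather than the web server choice itself.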


  • A very peculiar problem with an old PC and a newer laptop...

    - by user553492
    I got my old PC (248 MB of RAM, 80 GB disk) repaired, and the tech people put Windows XP on it. My newer laptop has Ubuntu 10.04. I only have one CAT5 cable and one USB cord. I connected my modem (which has a single CAT5 port and four USB ports) to the laptop with the CAT5 cable, and the internet works fine there. I also wanted internet on the older PC, so I installed the modem's USB drivers for Windows and that worked too. But I got fed up with Windows XP, so I made a separate partition for FreeBSD, which I planned to install. During the install I messed something up: FreeBSD now starts, but the boot menu shows a "?" in place of Windows XP, and choosing it gives an "NTLDR missing" message. I tried connecting the CAT5 cable directly between the old and new PC, and tried connecting the laptop with the USB cable, but nothing happened; then I realized the modem doesn't have a WORKING USB driver for Linux. FreeBSD doesn't even detect the LAN cable when I plug it into the old PC. So basically I have an old PC running FreeBSD that I can only start and stare at a blank terminal console (it works perfectly otherwise), and a laptop running Linux that only has internet over the CAT5 cable. Wasn't FreeBSD supposed to detect the LAN cable? So what can I do with my old PC? A local server (if that's even possible), or some such thing? Can you suggest any use? I'm 18 and I'm into learning programming and coding, so I'd like to practice on it. Thanks!
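    In case it helps frame an answer: if linking the two machines directly over the CAT5 cable is viable at all (assuming FreeBSD did detect the NIC itself; ifconfig should list it, and the cards or cable must handle crossover), I understand the usual approach is static addresses on a private subnet, roughly:

        # On the FreeBSD PC (em0 is a placeholder for the real interface name):
        ifconfig em0 inet 192.168.50.2 netmask 255.255.255.0 up
        # On the Ubuntu laptop, with the cable moved from the modem to the PC:
        sudo ifconfig eth0 192.168.50.1 netmask 255.255.255.0 up
        ping 192.168.50.2

    If that worked, the old PC could serve ssh, git, a small web server and so on, which would be plenty for programming practice.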


  • Server 2008 R2 & Domain Trusts - Attempt to Compromise Security

    - by SnAzBaZ
    We have two separate Active Directory domains, EUROPE and US, with a two-way trust between the domains/forests. I have a group of users called "USA Staff" that has access to certain shares on servers in the EUROPE domain, and a group called "EUROPE Staff" which has access to shares in the USA domain.

    Recently the USA PDC was upgraded to Windows Server 2008 R2. Now, when I try to access a share on a USA server from a Windows 7 workstation in the EUROPE domain, I get the "Please enter your username / password" dialog box with a message at the bottom: "The system has detected a possible attempt to compromise security." When I enter a username / password for a user in the USA domain, I can then access the network resource. Entering credentials for a EUROPE user, however, does not give me access, even though my NTFS and share permissions are set to allow it.

    Windows Server 2003 and Windows Server 2008 did not have this problem; it seems to be unique to R2. I found KB938457 and opened up port 88 on the Server 2008 R2 firewall, but it did not make any difference. Any other suggestions as to what to turn off in R2 to get this working again? Thanks.
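    One check I haven't done yet is verifying the trust itself survived the upgrade. If I understand the tooling right, this can be done from an elevated prompt on a domain controller with netdom (domain names as in our setup; they may need to be the full DNS names):

        netdom trust US /d:EUROPE /verify
        netdom trust EUROPE /d:US /verify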


  • Trouble Letting Users Get to Certain Sites through Squid Proxy

    - by armani
    We have Squid running on a RHEL server. We want to block users from getting to Facebook, other than a couple of specific pages, like our organization's own. Unfortunately, I can't get those specific pages unblocked without allowing ALL of Facebook through.

    [squid.conf]

        # Local users:
        acl local_c src 192.168.0.0/16
        # HTTP & HTTPS:
        acl Safe_ports port 80 443
        # File containing blocked sites, including Facebook:
        acl blocked dstdom_regex "/etc/squid/blocked_content"
        # Whitelist:
        acl whitelist url_regex "/etc/squid/whitelist"
        # I do know that order matters:
        http_access allow local_c whitelist
        http_access allow local_c !blocked
        http_access deny all

    [blocked_content]

        .porn_site.com
        .porn_site_2.com
        [...]
        facebook.com

    [whitelist]

        facebook.com/pages/Our-Organization/2828242522
        facebook.com/OurOrganization
        facebook.com/media/set/
        facebook.com/photo.php
        www.facebook.com/OurOrganization

    My biggest weakness is regular expressions, so I'm not 100% sure all of this is correct. If I remove the "!blocked" part of the http_access rule, all of Facebook works. If I remove "facebook.com" from the blocked_content file, all of Facebook works. Right now, visiting facebook.com/OurOrganization gives a "The website declined to show this webpage / HTTP 403" error in Internet Explorer, and "Error 111 (net::ERR_TUNNEL_CONNECTION_FAILED): Unknown error" in Chrome. WhereGoes.com tells me the redirects for that URL go like this: facebook.com/OurOrganization -- [301 Redirect] -- http://www.facebook.com/OurOrganization -- [302 Redirect] -- https://www.facebook.com/OurOrganization. I tried turning up the debug traffic out of Squid using "debug_options ALL,6", but I can't narrow anything down in /var/log/access.log and /var/log/cache.log. I know to issue "squid -k reconfigure" whenever I make changes to any files.
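    One detail that may matter: the final hop in that redirect chain is https, and for HTTPS the browser sends Squid a CONNECT request that contains only the hostname, never the path, so a path-based whitelist regex can't match it. To test one URL at a time instead of staring at the noisy logs, I believe squidclient (bundled with Squid) works like this; the port is a guess at a default http_port:

        squidclient -h 127.0.0.1 -p 3128 http://www.facebook.com/OurOrganization
        tail -n 5 /var/log/access.log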


  • Amavisd-new (2.6.4-3) failing to do "lookup_sql_dsn" when a large number of emails need to be processed

    - by sandip
    Amavis is failing to do its SQL lookups when a large number of emails are sent through it. It throws an error after scanning 40 to 50 emails:

        (!!)TROUBLE in process_request: sql exec: err=7, 57P01,
        DBD::Pg::st bind_param failed: FATAL: terminating connection due to administrator command\n
        SSL connection has been closed unexpectedly at (eval 103) line 164, <GEN50> line 5.
        at (eval 104) line 280, <GEN50> line 5.

    As soon as this error appears in the logs, amavis stops and port 10024 is closed. Thinking it was an error due to the SSL connection in the database (PostgreSQL 8.4), I disabled SSL in Postgres, but it was of no use. I have tried configuring amavis on another server, and I got the same error again. This is happening on a production server, so I am not able to scan emails according to per-user settings. Does anybody have any idea what the source of this error may be? Please help. Thanks in advance.
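    One lead I am considering, though it is a guess and not a confirmed fix: the log complains about an SSL connection closing, PostgreSQL of this era reportedly dropped long-lived SSL connections at renegotiation time, and amavisd holds its DB handles open across many messages, which fits the fails-after-40-to-50-mails pattern. If the client still negotiates SSL despite my change, disabling renegotiation would be a cheap experiment:

        # postgresql.conf (8.4.3 or later), then reload PostgreSQL
        ssl_renegotiation_limit = 0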


  • Is it possible to record a screen-video from a VNC server?

    - by nikie
    I have a computer that's running a VNC server. I would like to record a video of what's going on on this computer, if possible without installing additional software on it. Is there a program that can connect to the VNC server port and, instead of displaying the screen, save it to a video file (e.g. AVI)?

    Background: one of our customers sometimes has problems with the software he bought from us when he's performing a complex procedure. To help him, we offered that someone (a service technician or programmer) watch what he's doing during that procedure to find out if he's doing something wrong or if there's a bug in the software. Currently, this is done live via VNC. That has a few disadvantages:

    - The service technician has to be in the office at the time. As the customers are in different time zones, that can be in the middle of the night.
    - If the service technician forgets something or doesn't notice something, it's lost. There's no way to see what happened again.
    - Only a single computer can be watched by one service technician at a time.

    I know I could install normal screen-grab software on the computer, but we're talking about an embedded system with limited RAM, CPU and HDD space, so installing something new is not an easy decision. And VNC is already there. I could of course open a VNC client on some office PC and capture that PC's screen, but then I can only record one remote computer at a time; I often have to watch up to 8 screens in parallel. (And I don't think that screen-grabbing a VNC client would improve image quality, either.)
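    The closest match I have found so far is vnc2flv, a small Python package whose flvrec.py connects to the VNC server as an ordinary client and writes the session to an FLV file, so nothing extra goes onto the embedded machine. A sketch; the exact options are an assumption, so check flvrec.py --help for the installed version:

        easy_install vnc2flv       # or: pip install vnc2flv
        flvrec.py customer-host:0  # display 0 = port 5900

    Since it behaves like any other VNC client, several instances against different hosts should cover the 8-screens-in-parallel case.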


  • Can't remote into Virtual PC

    - by Spamela
    I used to be able to remote into my Virtual PCs; it had been working for at least a year. Yesterday it just stopped working, and I cannot figure it out. Things I have triple-checked:

    1. My Virtual PCs have "Allow Remote Access" checked.
    2. My Virtual PCs have an account in the Administrators group that is password protected.
    3. My host's entry in the registry for the Terminal Services port is still the default of 3389.

    So here is the strange thing: I can't even remote into a Virtual PC from its host, much less from another PC. From the host, I can ping the Virtual PC and get a response, but when trying to remote into it from the host I get the following error:

        Remote Desktop can't connect to the remote computer for one of these reasons:
        1) Remote access to the server is not enabled
        2) The remote computer is turned off
        3) The remote computer is not available on the network

    My host is running Windows 7; the Virtual PCs are running XP. Thank you for looking at this!
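    The next thing I suppose I can check from inside the XP guest is whether anything is listening on 3389 at all (stock XP commands, nothing Virtual PC specific):

        netstat -an | find "3389"
        sc query TermService

    If netstat shows nothing LISTENING on 3389, the guest's Terminal Services isn't up, and none of the host-side settings would matter.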


  • Apache 2.2 not responding or logging anything on Win 7

    - by Adam
    I'm having some trouble with Apache 2.2 on Windows 7. For over a year it's been running with no problem, but all of a sudden requests have just stopped responding. They don't time out as such: the browser just keeps on waiting forever. Nothing is recorded in the error log (set to debug level), the access log, or Windows' Event Log.

    The problem showed up when I added a new VHost and restarted; however, a syntax check has shown there's no problem with the config (from the little I changed), and the service does actually start error-free. I've also disabled VHosts and tried with just localhost.

    I've tried to telnet to the web server, and it connects, but nothing happens: the prompt just goes blank, I can't type anything, and I effectively become stuck. I've ensured there's a rule within Windows Firewall for Apache, and I've even disabled the entire firewall just to check it wasn't the cause. Still the same. If I stop Apache, however, the request fails immediately.

    I've uninstalled and reinstalled Apache, in the hope it might magically fix something using the default config, but still no joy. I've tried using a different port, but nothing different.

    Does anybody have any suggestions to fix this? Or to perhaps figure out whether it's Apache itself not responding or something sitting between the two that's holding things up? I'm not too savvy on debugging Windows issues like this, and I've been searching for hours but not found anything of use to me. Cheers, Adam
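    One suggestion I've come across for exactly these symptoms on Windows (TCP connects, nothing answers, nothing logged) involves Apache's use of the AcceptEx winsock extension. Whether it applies to my case is an assumption, but the directives are cheap to try in httpd.conf on 2.2:

        Win32DisableAcceptEx
        EnableSendfile Off
        EnableMMAP Off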


  • How to handle OpenVPN client as a service, when the laptop is physically on the network already?

    - by James
    The setup: I've gotten OpenVPN working on our Windows XP laptops. Users are limited accounts, so I went ahead and set the OpenVPN client to run as a service. That is great anyway, because it means the machines are on the VPN before logging in, so login scripts work, plus we can do remote support even if the user cannot log in (such as connecting via VNC or resetting passwords). It is also configured to send all traffic over the tunnel, so when they browse the internet, for example, it is just like browsing from our corporate network.

    The question(s): how does the OpenVPN client act when the computer is already physically on the same network as the OpenVPN server? Right now, the client is configured to connect to the public DNS name, which resolves to the public IP address, which will NOT get reflected back to the OpenVPN server, so it is effectively blocked from connecting while on the internal network. Is that a good thing? Or will it constantly try to connect, using up system resources and network resources? We will likely have hundreds of laptops regularly on the physical network with this, so it could contribute to a lot of unnecessary network chatter.

    Alternatively: would it be better to have the firewall reflect the port back to the OpenVPN server and let clients connect? Or have our internal DNS resolve the name to the private IP and allow them to connect directly? Would traffic then go over the VPN connection (which I do not want when already on the physical network)? Or is it possible to tell the client to ignore the connection when client and server are already on the same network?

    TL;DR: what's a sane way of handling an OpenVPN client running as an always-on service when the client and server will often be on the same network?
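    For the resource-use concern, if we keep the current blocked-from-inside design: as I read the OpenVPN 2.x options, a TCP client retries every few seconds forever by default, and the interval can be stretched so hundreds of on-site laptops stay quiet. A sketch for the client config (the interval is an arbitrary example, and connect-retry applies to proto tcp-client):

        # wait 300s between failed connection attempts instead of the default 5
        connect-retry 300
        # keep retrying name resolution rather than exiting
        resolv-retry infinite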


  • CD Drive not discovered

    - by user1009073
    I have a self-built computer using a P6T Deluxe motherboard, which has both SATA and IDE ports. It was built several years ago with an IDE CD/DVD drive. That drive started going bad (it would not burn CDs correctly), so I decided to replace it. I had difficulty finding an IDE DVD drive, so I bought a SATA DVD drive (Sony Optiarc 24x, Newegg URL: http://www.newegg.com/Product/Product.aspx?Item=N82E16827118067 ).

    I opened the computer and took out the old DVD drive. I left the IDE cable in place, connected to the motherboard, but it is not connected to any drives. I hooked up the new DVD drive with both power and a SATA data cable (SATA port 3, if I recall). When I power on my computer, the drive does NOT show up in Explorer. I can hit the DVD eject button and the drive will open up, so I know it is at least getting power.

    I thought maybe it was something in the BIOS. When I go to BIOS boot devices, it shows (1) floppy, (2) my hard drive, (3) ATAPI CD drive. The only other possibly relevant BIOS option I could find was under 'Storage Configuration': "Configure Storage as", which on my setup is RAID, since I am using two drives in a RAID configuration. The other options were IDE and AHCI.

    Other than trying to find an IDE DVD drive, is there anything else I can try? The drive does not show up at all in Windows Explorer. I did put in a CD thinking that might help, but nothing happened. Thanks, GS


  • Configuring PAM with pam_mount; getting a dlopen() with an HX_init error

    - by Jamie
    I'm trying to get automounting upon login working on Ubuntu 10.04 Beta 2. I didn't find a package for pam_mount, so I ended up downloading it and building it. This required:

        sudo apt-get install build-essential pkg-config libxml2-dev libssl-dev libpam-dev

    Additionally, libHX-dev is required, but as of yesterday (23/4/2010) the packaged version (3.2) wasn't up to snuff (3.4 is needed), so I downloaded, compiled and installed that too. Then:

        cd ./pam_mount-1.36/ && ./configure && make && sudo make install

    When I tried it (pam_mount), I got this in my auth log:

        Apr 23 12:18:02 ubuntu sshd[1195]: PAM unable to dlopen(/lib/security/pam_mount.so): /lib/security/pam_mount.so: undefined symbol: HX_init
        Apr 23 12:18:02 ubuntu sshd[1195]: PAM adding faulty module: /lib/security/pam_mount.so
        Apr 23 12:18:06 ubuntu sshd[1195]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.20.182 user=jrisk
        Apr 23 12:18:06 ubuntu sshd[1195]: pam_winbind(sshd:auth): getting password (0x00000388)
        Apr 23 12:18:06 ubuntu sshd[1195]: pam_winbind(sshd:auth): pam_get_item returned a password
        Apr 23 12:18:06 ubuntu sshd[1195]: pam_winbind(sshd:auth): user 'jrisk' granted access
        Apr 23 12:18:06 ubuntu sshd[1195]: Accepted password for jrisk from 192.168.20.182 port 4369 ssh2
        Apr 23 12:18:06 ubuntu sshd[1195]: pam_unix(sshd:session): session opened for user jrisk by (uid=0)

    What do I need to do to get HX_init visible to the system? This is related to an answer I previously got here.
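    My working theory is that the loader is either finding the distro's old libHX 3.2 or hasn't been told about /usr/local/lib at all, since a plain ./configure build installs there. Checking and fixing that looks like this (the paths are the usual defaults and may differ here):

        # see which libHX the module actually resolves against
        ldd /lib/security/pam_mount.so | grep -i libhx
        # make sure /usr/local/lib is in the loader path, then refresh the cache
        echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/local.conf
        sudo ldconfig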


  • Rails application keeps timing out when attempting to connect to Postgresql DB

    - by Corillian
    I'm hosting a PostgreSQL database on a small Windows Azure Ubuntu 13.04 VM with a default postgresql.conf. I have a Rails application running on a medium Windows Azure Ubuntu 13.04 VM. When accessing the database, the Rails application is constantly timing out. In its database.yml I have the connection pool size set to 120 and the timeout set to 15 seconds. Despite this, my Rails logs are full of the following error message:

        ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5 seconds (waited 5.0023203 seconds). The max pool size is currently 120; consider increasing it.

    My postgresql.conf has a max connection limit of 120; setting it any larger prevents the server from restarting successfully. I've also made sure that ssl was off in postgresql.conf per this article, but beyond that I have no idea what's going on. My PostgreSQL logs don't contain any info indicating something is wrong. My website is getting ~1k hits per day, so perhaps a small VM instance just isn't powerful enough? I appreciate any assistance!

    [Edit 1] The PostgreSQL database is in a separate cloud service within the same affinity group. For example:

        db small VM:      mydatabase.cloudapp.net (Affinity Group US East)
        forums medium VM: myforums.cloudapp.net   (Affinity Group US East)

    On the database server I have opened port 5432. The connection to the database server from the forums server uses its hostname. Is it possible that the DNS resolution is what's taking so long?
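    When the timeouts hit, I suppose I should distinguish "server out of connections" from "connections slow to establish". From psql on the database VM (standard catalog views, nothing Azure-specific):

        SELECT count(*) FROM pg_stat_activity;
        SELECT client_addr, count(*) FROM pg_stat_activity GROUP BY client_addr;

    And from the Rails VM, a rough measure of name resolution plus connect time (youruser is a placeholder):

        time psql -h mydatabase.cloudapp.net -U youruser -c 'SELECT 1'

    If the count sits near 120, the pool really is exhausted; if the timed connect is slow, the DNS theory gains weight.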


  • lighttpd on Fedora permission issues

    - by Isaac Gateno
    I'm trying to get started with lighttpd on Fedora 16 to run a RESTful API for development. Right now, even with the most basic sample config file, I'm getting 404 pages when I know the pages I'm pointing at exist. From reading other questions I'm leaning towards this being a permissions issue, but I'm confused about how lighttpd runs on Fedora. There's a user called "lighttpd", not "www-data"? I can't see this user in the system-config-users tool and I can't su into it to check which permissions it has.

    I'm trying to point lighttpd to "/var/www/lighttpd", which has some example pages in it. The permissions on the files inside are set to -rw-r--r-- and the permissions on the folder containing them are drwxr-xr-x. Doesn't that mean that any user can view these files? I'm not sure what else I should be checking, as I don't have much experience with server configuration. Any help would be appreciated.

    Edit: I was following the tutorial configuration here, so the lighttpd.conf file contains:

        server.document-root = "/var/www/lighttpd/"
        server.port = 3000
        mimetype.assign = (
          ".html" => "text/html",
          ".txt"  => "text/plain",
          ".jpg"  => "image/jpeg",
          ".png"  => "image/png"
        )

    I was just trying to get the basic example page working.
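    Two Fedora-specific checks that were suggested to me: walk the permission chain as the lighttpd user would see it, and rule out SELinux, since a 404/403 with correct-looking modes is a classic SELinux symptom. For example (adjust the file name to one of the example pages):

        # show every permission along the path
        namei -l /var/www/lighttpd/index.html
        # confirm which user the server actually runs as
        ps -o user,cmd -C lighttpd
        # if this prints Enforcing, look for denials in the audit log
        getenforce
        sudo grep lighttpd /var/log/audit/audit.log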


  • RabbitMQ Management console not working

    - by rrejc
    I have started with RabbitMQ. On one (Windows) machine I installed two RabbitMQ nodes as services; I chose the node name, port and service name for each of them. The services are running normally (I can see them listening in netstat -a). I then installed the management plugin with "rabbitmq-plugins enable rabbitmq_management" and restarted both services. But the plugin isn't running: I don't see it listening in netstat and I can't connect to the management console via browser.

    Any idea what could be wrong? Is there any log to see what is going on?

    Update: when I do rabbitmq-plugins list, I get:

        c:\RabbitMq\sbin>rabbitmq-plugins list
        [e] amqp_client                        3.0.1
        [ ] cowboy                             0.5.0-rmq3.0.1-git4b93c2d
        [ ] eldap                              3.0.1-gite309de4
        [e] mochiweb                           2.3.1-rmq3.0.1-gitd541e9a
        [ ] rabbitmq_auth_backend_ldap         3.0.1
        [ ] rabbitmq_auth_mechanism_ssl        3.0.1
        [ ] rabbitmq_consistent_hash_exchange  3.0.1
        [ ] rabbitmq_federation                3.0.1
        [ ] rabbitmq_federation_management     3.0.1
        [ ] rabbitmq_jsonrpc                   3.0.1
        [ ] rabbitmq_jsonrpc_channel           3.0.1
        [ ] rabbitmq_jsonrpc_channel_examples  3.0.1
        [E] rabbitmq_management                3.0.1
        [e] rabbitmq_management_agent          3.0.1
        [ ] rabbitmq_management_visualiser     3.0.1
        [e] rabbitmq_mochiweb                  3.0.1
        [ ] rabbitmq_mqtt                      3.0.1
        [ ] rabbitmq_old_federation            3.0.1
        [ ] rabbitmq_shovel                    3.0.1
        [ ] rabbitmq_shovel_management         3.0.1
        [ ] rabbitmq_stomp                     3.0.1
        [ ] rabbitmq_tracing                   3.0.1
        [ ] rabbitmq_web_stomp                 3.0.1
        [ ] rabbitmq_web_stomp_examples        3.0.1
        [ ] rfc4627_jsonrpc                    3.0.1-git7ab174b
        [ ] sockjs                             0.3.3-rmq3.0.1-git92d4ba4
        [e] webmachine                         1.9.1-rmq3.0.1-git52e62bc
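    Two Windows-specific possibilities I am looking at, though both are assumptions at this point. First, with RabbitMQ installed as a service, plugin changes reportedly only take effect after the service is reinstalled, not merely restarted. Second, the 3.0 management listener defaults to port 15672 on every node, so my two nodes on one machine may be fighting over it. Per node, from the sbin directory:

        rabbitmq-service.bat remove
        rabbitmq-service.bat install
        rabbitmq-service.bat start
        netstat -an | findstr 15672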


  • Setting a subdomain to access a home machine with Windows Remote Desktop

    - by ianhales
    I'm trying to connect remotely to my home machine through Windows Remote Desktop (amongst other things, but this is currently my primary focus). I can do this fine using my home WAN's static IP (thank god for cable!) with port forwarding, but I would like to access it via a subdomain of my web site (e.g. home.mydomain.co.uk).

    In the cPanel for my hosting account, I've gone into DNS Zones and altered the A record to point to my WAN's IP, which I thought should do the job, but I still cannot connect. When I ping the subdomain, I get my web host's IP, which I guess is to be expected, as I believe the host domain's DNS is used first and my server then handles the redirection of traffic to the IP in the A record. Is this the correct idea? Do A-record changes suffer from the same propagation delays as other DNS record changes? That could explain it. (By the way, this thread confirms my thought that setting the A record should be enough: Hostmonster Subdomain redirected to home server IP: How to ssh into home server using subdomain.)
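    For what it's worth, A-record changes are cached according to the record's TTL, so they do propagate with the same kind of delay as other DNS changes. The record can be watched directly instead of guessing; the name is my example, and @8.8.8.8 asks a public resolver rather than the local cache:

        dig +short home.mydomain.co.uk A
        dig +short @8.8.8.8 home.mydomain.co.uk A

    If both return the WAN IP, DNS is done and the remaining problem is routing or port forwarding; if they still return the web host's IP, the zone change hasn't taken effect.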


  • Windows 7 SSH file server

    - by Siriss
    Hello all - I have looked at the other posts, but have not quite found an answer. I have a question about Windows file sharing over SSH. I have copSSH installed and it is working for Remote Desktop connections. I have port 22 forwarded on my router, etc. I connect from a Mac or PuTTY with this:

        ssh -l copsshusername -L 3391:localhost:3389 [external ip]

    That works fine. I would now like to configure Windows 7 to give the SSH account I log in with access to certain shared folders. I have documents and videos and things that I would like to be able to download externally. I have done this before on Linux, and a long time ago on XP, but I cannot figure out what I am missing on Windows 7. There is a designated SSH user that copSSH uses to run the service and that I use to log in. I have googled and googled and have not found a solution that does everything I need, which is why I am turning here for ideas. I hope I am explaining this correctly. Thank you very much for your help!
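    Depending on what access I actually need, it occurs to me that I may not need Windows file sharing at all: copSSH is OpenSSH on Cygwin, so the same forwarded port 22 should already speak SFTP, and the only requirement would be NTFS read access for the copSSH account on the folders. From the Mac side (the path is a hypothetical example in Cygwin notation):

        sftp copsshusername@your.external.ip
        sftp> get "/cygdrive/c/Users/you/Videos/example.mp4"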


  • Library conflict in Mac OS X

    - by Juan Medín
    I was trying to install the ImageMagick library on Mac OS X Snow Leopard. First I tried MacPorts and, after it failed, Homebrew. It updated some dependencies and installed ImageMagick without problems. So far so good. The problem came when I ran Apache. I got the following error in the system log:

        07/04/11 12:55:15 org.apache.httpd[41841] httpd: Syntax error on line 115 of /private/etc/apache2/httpd.conf: Cannot load /opt/local/apache2/modules/libphp5.so into server: dlopen(/opt/local/apache2/modules/libphp5.so, 10): Library not loaded: /opt/local/lib/libpng12.0.dylib\n  Referenced from: /opt/local/apache2/modules/libphp5.so\n  Reason: image not found

    I checked /opt/local/lib and, surprise: I no longer have libpng12.0, only libpng14.0. So as far as I can tell, something went wrong while installing the ImageMagick library. Now I can't find a way to roll back to the previous libraries, other than copying them from a backup. Do you know if there is a way to recover the previous state or reinstall Apache? Or is this just a corrupt state, meaning I must reinstall OS X?
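    To see exactly which libraries the module wants versus what is installed (otool comes with Apple's developer tools; the paths are the ones from the error message):

        otool -L /opt/local/apache2/modules/libphp5.so | grep libpng
        ls /opt/local/lib | grep libpng

    If libphp5.so turns out to be a MacPorts build, forcing a rebuild so it links against the current libpng, e.g. sudo port -n upgrade --force php5, might reconcile them; that command is a guess at the port name, so "port installed" is worth checking first.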


  • How to troubleshoot Linksys E4200 Remote Management

    - by Jordan
    My Linksys E4200 is configured for Remote Management, but the router is not accepting the connections. Here's the configuration under Administration > Management > Remote Management Access:

        Remote Management:         Enabled
        Access via:                HTTP
        Remote Upgrade:            Disabled
        Allowed Remote IP Address: Any IP Address
        Remote Management Port:    8080

    The router is set up to use 192.168.10.41 as its static Internet IP address and 192.168.35.1 as its LAN IP address. I can access the router just fine via its LAN IP address, but I can't make a connection using http://192.168.10.41:8080. I've tried variations of the settings above (enabled HTTPS, enabled Remote Upgrade, set an IP range of 192.168.10.1-254), but nothing has worked yet. Hoping someone can at least point me in the right direction. Thanks.

    Update: to clarify, I have a wired router that connects straight to the T1 modem; it's configured to use 192.168.10.1-254 as its internal LAN range. The E4200 wireless router in question is on that LAN, using 192.168.10.41 as its WAN IP address, and the E4200's own internal LAN range is 192.168.35.1-254. I'm not trying to access the E4200 from the Internet, I'm just trying to access it from its WAN IP address. Thanks.


  • Redis connection issue

    - by mre
    We are currently experiencing a lot of Redis errors with the message:

        Unable to connect: read error on connection, trying next server

    We run Redis on FreeBSD using phpredis, and we have a hard time reproducing the error on Ubuntu, so this might be a hint. There's a long-running issue on this topic on GitHub. Basically, we get a socket from the operating system with a call to connect(host, port, timeout) in phpredis, but when we do a select(db_index) afterwards, we get an exception. Could there be an issue with persistence? I assume that connect() does nothing in the background and select() tries to use the connection, which is actually closed. We don't run into a timeout. We tried tuning TIME_WAIT without success. Any other ideas on where the problem might come from? What is the best way to track the issue down? dtrace, maybe?

    Update: we are currently looking into our BGSAVE settings. Interestingly, it takes half a second and more to create a fork of the process which regularly writes the data to disk (persistence), and maybe Redis can't respond to connect() requests during that timespan.
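    To test the BGSAVE/fork theory against the error timing: Redis exposes the duration of the last fork and whether a save is in progress. The field names vary a little by version, hence the broad pattern:

        redis-cli INFO | egrep 'latest_fork_usec|bgsave_in_progress|rdb_last'
        redis-cli CONFIG GET save

    If latest_fork_usec is in the hundreds of milliseconds and the client errors cluster around save times, loosening the save points (or moving persistence to a slave) would be the next experiment.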


  • Find which files an Apache process is writing to?

    - by Haluk
    We have an Apache process which becomes I/O-bound from time to time. Using atop, we can see it is a write operation; using lsof -p <PID> we can see a list of files open by the httpd process. First we thought the log files must be the problem, so we turned them off just to test. However, the write operations still continue. We will continue testing a few other things. For instance, we use PHP session variables a lot; maybe the PHP session files are getting all the writing. But is there a way to quickly identify the files which get written to by the httpd process? This way we can focus our efforts on those files.

    UPDATE: We used the strace command as suggested. Here are two lines from the output:

        write(23, "\27\0\0\0\3SET CHARACTER SET utf8", 27) = 27
        write(23, "\17\0\0\0\3SET NAMES utf8", 19) = 19

    We do not have a MySQL process on this server. So does strace also show what is being written to a network socket?

    UPDATE 2: During high I/O load, the process which consumes most of the write resources gives the following output to strace -e trace=write -p <PID>:

        --- SIGCHLD (Child exited) @ 0 (0) ---
        write(9, "!", 1) = 1
        write(19, "OPTIONS * HTTP/1.0\r\nUser-Agent: Apache (internal dummy connection)\r\n\r\n", 70) = 70

    However, I cannot figure out where these are being written to.
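    For mapping the descriptor numbers in that output to actual files or sockets while the process runs, this is what I was shown (using the descriptors 9, 19 and 23 from above):

        ls -l /proc/<PID>/fd/9 /proc/<PID>/fd/19 /proc/<PID>/fd/23
        lsof -p <PID> -a -d 9,19,23

    A socket shows up as something like socket:[12345], which would confirm that the SET NAMES writes are going to a remote MySQL server over the network rather than to a local file.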


  • Why is my rsync so slow compared to pure cp or even scp?

    - by nfm
    I'm transferring files from Linux to Windows 7 via a mounted share (the share is mounted from Windows onto the Linux machine). I'm copying lots of data (nearly a TB) from the old to the new machine within my LAN, and I'm unfortunate enough to only have 100 Mbit. Naturally I blindly used rsync, but after a day I started wondering why it felt so slow. Enabling the progress meter showed a transfer rate of about 2 Mbit/s. So I took a reasonably big file (800 MB) and timed the transfer:

        cp     : 05:33
        scp (*): 06:33
        rsync  : 21:51

        (*) scp via localhost to the same Linux machine, directly onto the
            share; completely useless as a route, but it provides a progress
            meter

    The tests were as simple as (cp|scp|rsync) <source> <destination>, with no special arguments except host/port for scp. I even tried the -W switch for rsync, but cancelled after ten minutes. rsync is 3.0.3 running on Lenny. Being able to interrupt the copy process at any time and resume led me to rsync, but now I think I seriously need to reconsider this requirement. How is such a big difference possible?
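    One variant suggested to me, on the theory that rsync's write-to-temp-file-then-rename behaviour is especially expensive on a CIFS mount: write in place instead. Whether it rescues the throughput here is untested, and it changes resume semantics, which matters for my interrupt-and-resume requirement:

        rsync -av --whole-file --inplace <source> <destination>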


  • redirecting output from telnet / nc to file in script fails when cron'd

    - by qhartman
    So, I have a device on my network which sits there listening on a port, and when a connection is made it dumps ASCII data out. I need to capture that data to a file. I wrote a dead simple shell script that does this:

        #!/bin/bash
        # Config variables. Age is in days.
        DATA_ROOT=/root/data
        FILENAME=data_`date +%F`.dat
        HOST=device
        COMPRESS_AGE=3

        # Sanity checks
        if [ ! -e $DATA_ROOT ]
        then
            echo "The directory $DATA_ROOT seems to not exist. Please create it."
            exit 1
        fi

        if [ -e $DATA_ROOT/$FILENAME ]
        then
            echo "You seem to have extracted data already today. Aborting"
            exit 1
        fi

        # Get data
        nc $HOST 2202 > $DATA_ROOT/$FILENAME

        # Compress old data
        find $DATA_ROOT -type f -mtime +$COMPRESS_AGE -exec gzip {} \;

        exit 0

    It works great when I run it by hand, but when I run it from cron, it doesn't capture any of the output. If I replace nc with telnet, I see the initial telnet headers about escape sequences and whatnot, but not the data. Ideas? I've tried forcing bash to act like an interactive shell with -i. I've tried redirecting both stderr and stdout. I know it's got to be some silly simple thing, but I'm utterly failing. This is driving me nuts...

    EDIT: I also just noticed that the nc processes from all my previous attempts at this have been sitting there sleeping, and when I killed them, cron sent me a bunch of nonsensical error messages. At least now I have something to dig into!
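    One cron-specific difference I want to test: interactively, nc has a terminal on stdin; under cron it doesn't, and some netcat builds then sit waiting on input instead of reading from the network, which would fit those sleeping processes. Two hedged variants of the capture line (whether -d exists depends on which netcat the distro ships):

        # give nc an explicit, immediately-EOF stdin
        nc $HOST 2202 < /dev/null > $DATA_ROOT/$FILENAME
        # or, with OpenBSD-style netcat: don't read stdin at all
        nc -d $HOST 2202 > $DATA_ROOT/$FILENAME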


  • Install mod_perl2 on Apache 2.2.14 (Ubuntu 10.04)

    - by MICADO
    Hi guys, I have installed libapache2-mod-perl2 via Synaptic. I tried this line in httpd.conf:

        LoadModule perl_module modules/mod_perl.so

    Apache tells me when I reload the server: "[warn] module perl_module is already loaded, skipping". Well, OK, but when I try to browse to a directory, I get a Forbidden error:

        You don't have permission to access /cgi-bin/ on this server.
        Apache/2.2.14 (Ubuntu) Server at 192.168.0.10 Port 90

    That page should show mod_perl is installed, and that's not the case... I'd like the virtual host that follows to run with mod_perl2:

        <VirtualHost v1:80>
            ServerAdmin webmaster@localhost
            ServerName v1
            DocumentRoot /var/www/v1

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>

            <Directory /var/www/v1/html/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /var/www/v1/cgi-bin/
            <Directory "/var/www/v1/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I'd like to know how to configure mod_perl2. Do I have to change something in the Apache configuration to make my cgi-bin directory work with mod_perl2? Thanks for any help!
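    From what I've read so far, loading mod_perl does not by itself route /cgi-bin/ through it; ScriptAlias still runs the scripts as plain CGI. A sketch of what I understand the usual arrangement to be, handing the directory to ModPerl::Registry (the directives are standard mod_perl 2 ones, the paths are mine):

        <Directory "/var/www/v1/cgi-bin">
            SetHandler perl-script
            PerlResponseHandler ModPerl::Registry
            PerlOptions +ParseHeaders
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>

    Also, on Ubuntu the package apparently wires the module in with "sudo a2enmod perl" rather than a hand-written LoadModule line, which would explain the "already loaded" warning.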


  • Trying to Set up SMTP Server on Windows Server 2012

    - by datc
    I'm working on a website, and I need to test the functionality of sending email messages from ASP.NET, something like this:

        Dim msg As New MailMessage("email1", "email2")
        msg.Subject = "Subject"
        msg.IsBodyHtml = True
        msg.Body = "Click <a href='site'>here</a>."
        Dim client As SmtpClient = New SmtpClient()
        client.Host = "My-Server"
        client.Port = 25
        client.DeliveryMethod = SmtpDeliveryMethod.Network
        client.Send(msg)

    This is running from a Windows 8 workstation. I've installed the SMTP server on my Windows Server 2012 machine. The mail shows up in the mailroot/Queue folder and sits there, eventually getting deposited into Badmail.

    Now, I have AT&T U-verse at home, and a few devices connected to the gateway, including, let's call it, "My-Server". When I run SmtpDiag from, say, datc@... to [email protected], I get: SOA serial number match passed; Local DNS (99-135-60-233.lightspeed.bcvloh.sbcglobal.net) and Remote DNS (hotmail.com) tests NOT passed; and ultimately "Connecting to the server failed. Error: 10060. Failed to submit mail to mx2.hotmail.com." When I set My-Server's IP to static and equal to the external IP, 99.135.60.233, and run SmtpDiag again, the SOA, Local DNS, and Remote DNS tests pass, but I get the same 10060 error. Same for yahoo.com, gmail.com, and so forth.

    Is it my ISP's job to fix this, some PTR record missing somewhere? Is it at all possible to have a home-based SMTP server? All I want is to test my email code. Perhaps my IP address is just not "trusted" somehow. Thanks.
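    Before blaming DNS, I was told to test whether the ISP lets the server talk to a remote MX on port 25 at all, since residential blocks on outbound 25 produce exactly this queue-then-Badmail symptom. From the server:

        telnet mx2.hotmail.com 25

    If no 220 banner appears within a few seconds, outbound 25 is filtered and no local SMTP configuration will fix it; the usual way out is relaying through a smarthost from the ISP or a mail provider.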


  • gitweb refusing to blame

    - by Slipp D. Thompson
    I'm attempting to get gitweb (git 1.8.4.2, via git instaweb) in a project directory on my Debian server to offer blame views.

    In my /etc/gitweb.conf:

        # default logo, favicon, etc. settings
        $feature{'blame'}{'default'} = [1];
        $feature{'pickaxe'}{'default'} = [1];
        $feature{'snapshot'}{'default'} = ['tgz', 'txz', 'zip'];
        $feature{'highlight'}{'default'} = [1];
        $feature{'pathinfo'}{'default'} = [1];

    In my global config file:

        [gitweb]
            blame = true
            snapshot = tgz, txz, zip
            patches = 256
            avatar = gravatar
        [instaweb]
            local = false
            httpd = apache2 -f
            port = 4321

    In my project's .git/config file:

        [gitweb]
            blame = true

    And yet, when I try to load a blame view (via hand-modifying the URL to http://myserversip:4321/?p=.git;a=blame;f=Tests/InchCoordProxyTests.m;h=b4b2…;hb=53b4, since blame action links don't show up), I get the "Blame view not allowed" error. A quick search for that string in the gitweb.cgi source reveals plainly that the gitweb_check_feature('blame') conditional is failing.

    What am I doing wrong? Or is there a way to verbosely print out why gitweb is doing what it's doing (e.g. which config files were read, which settings were loaded from each file, etc.)?
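    Two hedged ideas I'm still checking. First, gitweb may want the feature marked overridable before a per-repository [gitweb] blame setting is honoured, i.e. in gitweb.conf:

        $feature{'blame'}{'default'} = [1];
        $feature{'blame'}{'override'} = 1;

    Second, git instaweb appears to generate its own gitweb configuration under .git/gitweb/, so /etc/gitweb.conf may never be read at all in this setup, which would itself answer the "which config files were read" question.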

