Search Results

Search found 22893 results on 916 pages for 'client scripting'.


  • Why is my Drupal Registration email considered spam by gmail? (headers included)

    - by Jasper
    I just created a Drupal website on a uni.cc subdomain that is brand-new also (it has barely had the 24 hours to propagate). However, when signing up for a test account, the confirmation email was marked as spam by gmail. Below are the headers of the email, which may provide some clues. Delivered-To: *my_email*@gmail.com Received: by 10.213.20.84 with SMTP id e20cs81420ebb; Mon, 19 Apr 2010 08:07:33 -0700 (PDT) Received: by 10.115.65.19 with SMTP id s19mr3930949wak.203.1271689651710; Mon, 19 Apr 2010 08:07:31 -0700 (PDT) Return-Path: <[email protected]> Received: from bat.unixbsd.info (bat.unixbsd.info [208.87.242.79]) by mx.google.com with ESMTP id 12si14637941iwn.9.2010.04.19.08.07.31; Mon, 19 Apr 2010 08:07:31 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of [email protected] designates 208.87.242.79 as permitted sender) client-ip=208.87.242.79; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of [email protected] designates 208.87.242.79 as permitted sender) [email protected] Received: from nobody by bat.unixbsd.info with local (Exim 4.69) (envelope-from <[email protected]>) id 1O3sZP-0004mH-Ra for *my_email*@gmail.com; Mon, 19 Apr 2010 08:07:32 -0700 To: *my_email*@gmail.com Subject: Account details for Test at YuGiOh Rebirth MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; format=flowed; delsp=yes Content-Transfer-Encoding: 8Bit X-Mailer: Drupal Errors-To: info -A T- yugiohrebirth.uni.cc From: info -A T- yugiohrebirth.uni.cc Message-Id: <[email protected]> Date: Mon, 19 Apr 2010 08:07:31 -0700 X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - bat.unixbsd.info X-AntiAbuse: Original Domain - gmail.com X-AntiAbuse: Originator/Caller UID/GID - [99 500] / [47 12] X-AntiAbuse: Sender Address Domain - bat.unixbsd.info X-Source: X-Source-Args: /usr/local/apache/bin/httpd -DSSL X-Source-Dir: gmh.ugtech.net:/public_html/YuGiOhRebirth
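
     A couple of things in those headers commonly weigh against a message regardless of its content: SPF only passes via Google's "best guess" record (the sending domain publishes no SPF of its own), the From: address (obfuscated above) belongs to yugiohrebirth.uni.cc while the envelope sender is nobody@bat.unixbsd.info, and the domain is brand new. A quick, hedged check of what the sending domains actually publish - domain names taken from the headers above:

         # does either sending domain publish an SPF (TXT) record of its own?
         dig +short TXT uni.cc
         dig +short TXT bat.unixbsd.info

     Making the Drupal site e-mail a real, parseable address at the sending domain and publishing SPF for that domain are the usual first steps; a new domain will still need some time to build reputation.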

    Read the article

  • Windows 7 host does not answer to ping

    - by gencha
    Today I tried printing on a shared printer on one of our homegroup members. Sadly it did not work (printer marked as offline). Shortly after, I noticed I can't even ping the machine that owns the printer (I also can not remotely access it in any other way I've tried). Currently I'm trying to ping the machine from the router both computers are connected to (and my machine in question doesn't answer). I do receive the echo requests (as verified with WireShark). I also added a rule in the Windows Firewall to specifically allow ICMP echo requests, but that didn't change anything. I also tried netsh firewall set icmpsetting 8 enable, but that didn't change anything either. Completely disabling the Windows Firewall has no effect on the issue either. One has to wonder, where does Windows log when and why it ignored any incoming packets? How can I get to the bottom of this? Here are some ways I found to dig deeper into the issue: Enabling logging on the Windows Firewall Enabling Windows Filtering Platform Auditing Both methods at least give more insight into the issue. The plain log file is full of entries like this: 2011-11-11 14:35:27 DROP ICMP 192.168.133.1 192.168.133.128 - - 84 - - - - 8 0 - RECEIVE So the ICMP packets are being dropped as if that was intended. The Event Viewer now gives a little bit more details: The Windows Filtering Platform has blocked a packet. Application Information: Process ID: 4 Application Name: System Network Information: Direction: Inbound Source Address: 192.168.133.1 Source Port: 0 Destination Address: 192.168.133.128 Destination Port: 8 Protocol: 1 Filter Information: Filter Run-Time ID: 214517 Layer Name: Receive/Accept Layer Run-Time ID: 44 This same entry is always repeated with 2 points of information changing: Process ID: 420 Application Name: \device\harddiskvolume2\windows\system32\svchost.exe The service host with the PID 420 is the host for the following services: Windows Audio DHCP Client Windows Event Log HomeGroup Provider TCP/IP NetBIOS Helper Security Center Additionally, there is currently this problem with the same machine: Even though my network is set to be a "Home network", I am unable to create a new homegroup.
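
     Since both the firewall log and the WFP audit show inbound ICMP type 8 being dropped, it may be worth adding the rule with the newer advfirewall syntax as well - the legacy netsh firewall context quoted above is deprecated on Windows 7. A hedged example:

         netsh advfirewall firewall add rule name="Allow inbound ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow

     If the drops persist with that rule in place (and with the firewall disabled, as already tried), the Filter Run-Time ID 214517 from the audit event can be looked up with netsh wfp show filters to see which provider registered it - third-party security software installs WFP filters that survive disabling the Windows Firewall.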

    Read the article

  • How to access a port via OpenVPN only

    - by Andy M
    I've set up an openvpn server alongside an apache website that can only be accessed on port 8100 on the same machine. My /etc/openvpn/server.conf file looks like this: port 1194 proto tcp dev tun ca ./easy-rsa2/keys/ca.crt cert ./easy-rsa2/keys/server.crt key ./easy-rsa2/keys/server.key # This file should be kept secret dh ./easy-rsa2/keys/dh1024.pem # Diffie-Hellman parameter server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt # make sure clients can still connect to the internet push "redirect-gateway def1 bypass-dhcp" keepalive 10 120 comp-lzo persist-key persist-tun status openvpn-status.log verb 3 Now I tried to let only clients connected to the vpn network access the website on apache via port 8100. So I defined a few iptables rules: #!/bin/sh # My system IP/set ip address of server SERVER_IP="192.168.0.2" # Flushing all rules iptables -F iptables -X # Setting default filter policy iptables -P INPUT DROP iptables -P OUTPUT DROP iptables -P FORWARD DROP # Allow incoming access to port 8100 from OpenVPN 10.8.0.1 iptables -A INPUT -i tun0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o tun0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT # outgoing http iptables -A OUTPUT -o tun0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A INPUT -i tun0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT Now when I connect to the server from my client computer and try to access the website on 192.168.0.2:8100, my browser can't open it. Will I have to forward traffic from tun0 to eth0? Or is there anything else I'm missing?
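
     One detail worth double-checking before anything else: the site is said to answer only on port 8100, but every rule above matches port 80, so even a VPN client never gets past the default DROP policy. A hedged correction of just those rules, keeping the interfaces from the question:

         # allow VPN clients (arriving on tun0) to reach the web app on port 8100 only
         iptables -A INPUT  -i tun0 -p tcp --dport 8100 -m state --state NEW,ESTABLISHED -j ACCEPT
         iptables -A OUTPUT -o tun0 -p tcp --sport 8100 -m state --state ESTABLISHED -j ACCEPT

     Also note that browsing to 192.168.0.2:8100 arrives on eth0, not tun0; pointing the client at the VPN-side address (10.8.0.1:8100 with the server directive above) keeps the traffic on tun0 so these rules apply, and no forwarding between tun0 and eth0 is needed for a service on the same machine.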

    Read the article

  • Is it possible for the Subversion Apache module to serve html files with an html content-type without using the svn:mime-type property?

    - by Martin Pain
    I am aware that if you set the svn:mime-type Subversion property on a .html file to text/html then when viewing the file in a browser through the Subversion module in Apache httpd it will be served with a Content-Type: text/html header, enabling the browser to render it as HTML rather than plain text. However, I am looking for a way to do this without using the svn:mime-type property. I'm aware that you can configure your svn client to automatically add the property - this is not what I want, as I do not want to ensure all users have these settings. I'm also aware that I could create a pre-commit hook that rejects the commit if the properties are not set, in order to force users to set the property - I might fall back to that, but I'm looking for something less intrusive. I'm also aware that I could use a post-commit hook to add the properties automatically on the server-side. I'd rather not do that (as users then have to update immediately after their commit, and it's not trivial to write) - I'm looking for a better alternative. Perhaps something with rewrite rules in the Apache server?
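
     For the pre-commit fallback mentioned above, the check itself is small; a hedged sketch (hook path, repository layout and the .html-only scope are assumptions) using svnlook against the incoming transaction:

         #!/bin/sh
         # pre-commit hook sketch: reject added/changed .html files without svn:mime-type
         # (does not handle filenames containing spaces)
         REPOS="$1"
         TXN="$2"
         status=0
         for f in $(svnlook changed -t "$TXN" "$REPOS" | awk '{print $2}' | grep '\.html$'); do
             if ! svnlook propget -t "$TXN" "$REPOS" svn:mime-type "$f" >/dev/null 2>&1; then
                 echo "svn:mime-type must be set on $f" >&2
                 status=1
             fi
         done
         exit $status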

    Read the article

  • ipv6 reverse DNS delegation

    - by user1709492
     I currently have 2001:1973:2303::/48 assigned to me and I'll be assigning /64s to customers. I'd like to have one zone file for the /48 where I can essentially point / redirect queries to different nameservers. Example (desired effect): 2001:1973:2303:1234::/64 -> ns1.example.com, ns2.example.com 2001:1973:2303:2345::/64 -> ns99.example2.com, ns100.example2.com 2001:1973:2303:4321::/64 -> ns1.cust1.com, ns2.cust1.com Current /48 zonefile: $TTL 3h $ORIGIN 3.0.3.2.3.7.9.1.1.0.0.2.ip6.arpa. @ IN SOA ns3.example.ca. ns4.example.ca. ( 2011071030 ; serial 3h ; refresh after 3 hours 1h ; retry after 1 hour 1w ; expire after 1 week 1h ) ; negative caching TTL of 1 hour IN NS ns3.example.ca. IN NS ns4.example.ca. 1234 IN NS ns1.example.com. NS ns2.example.com. 2345 IN NS ns99.example2.com. NS ns100.example2.com. 4321 IN NS ns1.cust1.com. NS ns2.cust1.com. Where am I going wrong? My request seems simple, at least to me. To put it in terms of firewalling, I want to redirect traffic: a client queries 2001:1973:2303:4321::1 - ns3.example.ca sees the request and redirects the query to ns1.cust1.com - ns1.cust1.com answers the query with omg.itworks.ca (provided ns1.cust1.com is properly configured).
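
     The owner names are the likely culprit: in an ip6.arpa zone every label is a single reversed nibble, so under the $ORIGIN above the /64 2001:1973:2303:1234::/64 is delegated at 4.3.2.1, not 1234. The delegations would then look roughly like this (same nameservers as in the question):

         ; each /64 is delegated at its four nibbles, written in reverse order
         4.3.2.1   IN NS ns1.example.com.
                   IN NS ns2.example.com.
         5.4.3.2   IN NS ns99.example2.com.
                   IN NS ns100.example2.com.
         1.2.3.4   IN NS ns1.cust1.com.
                   IN NS ns2.cust1.com.

     With that in place the customer's nameservers simply need a matching zone (e.g. 4.3.2.1.3.0.3.2.3.7.9.1.1.0.0.2.ip6.arpa.); the parent does not redirect queries the way a firewall would, it just hands out the NS referral.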

    Read the article

  • Trouble with nginx and serving from multiple directories under the same domain

    - by Phase
    I have nginx setup to serve from /usr/share/nginx/html, and it does this fine. I also want to add it to serve from /home/user/public_html/map on the same domain. So: my.domain.com would get you the files in /usr/share/nginx/html my.domain.com/map would get you the files in /home/user/public_html/map With the below configuration (/etc/nginx/nginx.conf) it appears to be going to my.domain.com/map/map as noticed by this: 2011/03/12 09:50:26 [error] 2626#0: *254 "/home/user/public_html/map/map/index.html" is forbidden (13: Permission denied), client: <edited ip address>, server: _, request: "GET /map/ HTTP/1.1", host: "<edited>" I've tried a few things but I'm still not able to get it to cooperate, so any help would be greatly appreciated. ####################################################################### # # This is the main Nginx configuration file. # ####################################################################### #---------------------------------------------------------------------- # Main Module - directives that cover basic functionality #---------------------------------------------------------------------- user nginx; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; #---------------------------------------------------------------------- # Events Module #---------------------------------------------------------------------- events { worker_connections 1024; } #---------------------------------------------------------------------- # HTTP Core Module #---------------------------------------------------------------------- http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; keepalive_timeout 65; server { listen 80; server_name _; #access_log logs/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm; } location /map { root /home/user/public_html/map; index index.html index.htm; } error_page 404 /404.html; location = /404.html { root /usr/share/nginx/html; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } } include /etc/nginx/conf.d/*.conf; }
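
     The doubled /map/map in the error log is the key: root is always concatenated with the full request URI, so location /map plus root /home/user/public_html/map resolves to /home/user/public_html/map/map/. A hedged fix is either to point root one level up, or to use alias, which substitutes the matched prefix instead of appending it:

         location /map {
             # root has the URI (/map) appended, so point it at the parent directory
             root  /home/user/public_html;
             index index.html index.htm;
         }

         # equivalent alternative:
         # location /map/ {
         #     alias /home/user/public_html/map/;
         #     index index.html index.htm;
         # }

     The "(13: Permission denied)" part is separate: the nginx worker user must be able to traverse /home/user and /home/user/public_html (execute permission on each directory) for either variant to work.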

    Read the article

  • How do I increase the buffer size for domain sockets in OS X 10.6

    - by Chas. Owens
    In Linux I have no problem dumping tons of data into a domain socket, but the same code on OS X 10.6.2 blows up after about 65 records. The socket reader code looks like #!/usr/bin/perl use strict; use warnings; use IO::Socket; unlink "foo"; my $sock = IO::Socket::UNIX->new ( Local => 'foo', Type => SOCK_DGRAM, Timeout => 600, ) or die "Could not create socket: $!\n"; while (<$sock>) { chomp; print "[$_]\n"; } And the client code looks like #!/usr/bin/perl use strict; use warnings; use IO::Socket; my $sock = IO::Socket::UNIX->new ( Peer => 'foo', Type => SOCK_DGRAM, Timeout => 600, ) or die "Could not create socket: $!\n"; for my $i (1 .. 1_000_000) { print $sock "$i\n" or die $!; } close $sock; The error message I get is No buffer space available at write.pl line 15.. It seems fairly obvious that there is a difference in the buffer size between Linux and OS X, but I don't know how to set it OS X (or what the possible negative side effects might be).
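
     On OS X, as on other BSD-derived systems, the limits for AF_UNIX datagram sockets are sysctls rather than something the usual SO_SNDBUF route controls, which is why the writer hits "No buffer space available" so quickly. A hedged starting point - the names are believed correct for 10.6, the value is only illustrative and does not persist across reboots:

         # inspect the current datagram limits for local (UNIX-domain) sockets
         sysctl net.local.dgram.maxdgram net.local.dgram.recvspace

         # enlarge the receive buffer (requires root; pick a value suited to the record size)
         sudo sysctl -w net.local.dgram.recvspace=65536

     Since the reader can still fall behind a tight writing loop, having the client treat ENOBUFS as a retryable error (sleep briefly and resend) is the portable complement to raising the limit - or switch the socket to SOCK_STREAM, which gives flow control for free.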

    Read the article

  • Nginx proxy to IIS Connection Timeout

    - by MitMaro
    I am having an issue with random timeouts with a Nginx proxy connecting to an IIS machine. I have been watching a packet capture between the two servers and it seems that the IIS machine is receiving a SYN packet but is not responding with what I think should be an ACK response. Before the timeout occurs there seems to be a slower response from the IIS server. There is no unusual memory or processor usage on the IIS or Nginx machine. Some information on the servers and setup: Nginx Machine: Ubuntu 10.04 64bit Nginx 0.7.65 Amazon EC2 Windows Machine: Windows Server 2008 IIS 7 ASP.net Application in Integrated Mode Nginx Error: 2011/01/10 17:57:40 [error] 8297#0: *30 connect() failed (110: Connection timed out) while connecting to upstream, client: 209.***.***.***, server: secure.example.com, request: "GET /a/path/deliver.aspx HTTP/1.1", upstream: "http://***.***.***.****:****//another/path/deliver.aspx", host: "secure.example.com" WireShark Packets 6521.449528 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477422103 TSER=0 WS=7 6524.443239 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477422403 TSER=0 WS=7 6530.443241 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477423003 TSER=0 WS=7

    Read the article

  • Sane patch schedule for Windows 2003 cluster

    - by sixlettervariables
    We've got a cluster of 75 Win2k3 nodes at work in a coarse grained compute cluster. The cluster is behind a mountain of firewalls and resides in its own VLAN. Jobs of all sizes and types run on the cluster and all of the executables running are custom-made. (ed: additional notes on our executables) The jobs range from 30 seconds to 7 days in duration, and may contain one executable or 2000 sub-jobs (of short duration). Obviously we are trying to avoid the situation where our IT schedules a reboot during a 7 day production job. We have scheduling software which accomodates all of the normal tasks for a coarse grained cluster and we can control which machines are active for submission, etc. If WSUS was in some way scriptable (or the client could state it's availability for shutdown) we could coordinate the two systems and help out. Currently, the patch schedule is the Sunday after Super Tuesday regardless of what is running on the cluster. We have to ask for an exemption every time we want to delay patching a machine for a long running production job. Basically, while our group is responsible for the machines we have little control over IT's patch schedule. Is patching monthly with MS's schedule sane for a production Windows cluster? Are there software hooks in WSUS where we could say, "please don't reboot just yet"?

    Read the article

  • How to implement a virtual server running Ubuntu inside a fileserver in Windows?

    - by user541445
     I work in a company that has some limitations regarding their budget. They need a client/server application. I can code the application; I've made small tests with prototype applications that work. The thing is that they only have fileservers, and the application they need must handle concurrent use, so the database must live on their local network (a fileserver is the only choice). So far, I have explored almost every option available, starting with: Desktop databases. Access (we have a license) (but concurrency is just not effective, besides it's Windows software, yuck). SQLite (nice, but since the information they manage is a lot, I've performed some concurrency tests doing INSERTs and SELECTs at the same time; it fails, somehow it just stops inserting). OpenOffice Base (I dissected a Base database only to find that it is a file-mode HSQLDB; I've tried it, not concurrent at all). Etc. (you name the open-source desktop database manager, and yes, I've tried that one.) Server databases: call me a stubborn person, but the documentation of some server databases says they will work in a file mode. I've tried a lot of products: Postgres, MySQL, Firebird, H2, Derby, Oracle Express, IBM DB2 Express, etc. So, I really need a hand here. I've been doing this install/delete/depression routine with a lot of databases for 3 weeks, until I came up with a crazy idea and I just came here to express it. So, my question comes down to this: will installing lightweight virtualization software like Virtual PC, and then installing a server OS like Ubuntu Server inside it, work? Will it work 24/7, or only while the virtualization software is running? Will this work on a fileserver? Any suggestion, answer, criticism of the place I work, or crazy new concurrent database that will work on a fileserver will be most welcome.

    Read the article

  • Port knocking via SSH tunnels

    - by j0ker
    I have a server running in my university's internal network. There is only one SSH daemon running which is secured by port knocking with knockd. Works fine if I try to connect from within the internal network. But since the server has no external IP, I have to tunnel into the internal network every time I want to access the server from outside. And since tunneling only works for a single port I cannot do the port knocking as easily as from an internal client. In fact, I don't get it to work at all. What I'm trying is opening tunnels for all the different ports that have to be knocked. Then I send TCP-SYN packets into the tunnels. But that doesn't work even for a single port. If I establish the tunnel on the first port in the knock sequence and send a packet through it, it doesn't reach the server. There is no entry in the log file of knockd, while there should be something like 123.45.67.89: openSSH: Stage 1 (as shown with internal knocks). So I guess, the problem doesn't exist within my knocking script but is a more general one. Are there any known problems with what I'm trying to do? Is it even possible or am I missing something? Thanks in advance!
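
     knockd only watches the server's real interface, so SYNs pushed through per-port SSH tunnels arrive as ordinary connections from the tunnel endpoint and never form a knock sequence from the outside - which matches the empty knockd log. If any internal host is reachable by SSH, a hedged workaround is to fire the knock from there and only then tunnel to the hidden daemon (hostnames, ports and the knock sequence below are placeholders):

         # run the TCP knock sequence from an internal host, then tunnel to the now-open sshd
         ssh user@internal-gateway 'for p in 7000 8000 9000; do nc -z -w 1 hidden-server $p; done'
         ssh -L 2222:hidden-server:22 user@internal-gateway
         ssh -p 2222 user@localhost

     The knock and the later connection both reach the server from the gateway's address, so knockd opens the port for the same source IP that subsequently connects.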

    Read the article

  • IPC between multiple processes on multiple servers

    - by z8000
    Let's say you have 2 servers each with 8 CPU cores each. The servers each run 8 network services that each host an arbitrary number of long-lived TCP/IP client connections. Clients send messages to the services. The services do something based on the messages, and potentially notify N1 of the clients of state changes. Sure, it sounds like a botnet but it isn't. Consider how IRC works with c2s and s2s connections and s2s message relaying. The servers are in the same data center. The servers can communicate over a private VLAN @1GigE. Messages are < 1KB in size. How would you coordinate which services on which host should receive and relay messages to connected clients for state change messages? There's an infinite number of ways to solve this problem efficiently. AMQP (RabbitMQ, ZeroMQ, etc.) Spread Toolkit N^2 connections between allservices (bad) Heck, even run IRC! ... I'm looking for a solution that: perhaps exploits the fact that there's only a small closed cluster is easy to admin scales well is "dumb" (no weird edge cases) What are your experiences? What do you recommend? Thanks!

    Read the article

  • Apache proxy: Why is one vhost returning Forbidden while the other one works?

    - by Stefan Majewsky
    I have a Java application that needs to talk to another intranet website using HTTPS in both directions. After fighting with Java's SSL implementations for some time, I gave up on that, and have now set up an Apache that's supposed to act as a bidirectional reverse proxy: external app ---(HTTPS request)---> Apache ---(local HTTP request)---> Java app This direction works just fine, however the other direction does not: Java app ---(local HTTP request)---> Apache ---(HTTPS request)---> external app This is the configuration for the vhost implementing the second proxy: Listen 127.0.0.1:8081 <VirtualHost appgateway:8081> ServerName appgateway.local SSLProxyEngine on ProxyPass / https://externalapp.corp:443/ ProxyPassReverse / https://externalapp.corp:443/ ProxyRequests Off AllowEncodedSlashes On # we do not need to apply any more restrictions here, because we listened on # local connections only in the first place (see the Listen directive above) <Proxy https://externalapp.corp:443/*> Order deny,allow Allow from all </Proxy> </VirtualHost> A curl http://127.0.0.1:8081/ should serve the equivalent of https://externalapp.corp, but instead results in 403 Forbidden, with the following message in the Apache error log: [Wed Jun 04 08:57:19 2014] [error] [client 127.0.0.1] Directory index forbidden by Options directive: /srv/www/htdocs/ This message completely puzzles me: Yes, I have not set up any permissions on the DocumentRoot of this vhost, but everything works fine for the other proxy direction where I haven't. For reference, here's the other vhost: Listen this_vm_hostname:443 <VirtualHost javaapp:443> ServerName javaapp.corp SSLEngine on SSLProxyEngine on # not shown: SSLCipherSuite, SSLCertificateFile, SSLCertificateKeyFile SSLOptions +StdEnvVars ProxyPass / http://localhost:8080/ ProxyPassReverse / http://localhost:8080/ ProxyRequests Off AllowEncodedSlashes On # Local reverse proxy authorization override <Proxy http://localhost:8080/*> Order deny,allow Allow from all </Proxy> </VirtualHost>
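
     One hedged explanation that fits the symptom: a <VirtualHost appgateway:8081> block only catches connections arriving on whatever address appgateway resolves to, and a request to 127.0.0.1:8081 that matches no vhost is served by the default server from /srv/www/htdocs - exactly the directory named in the "Directory index forbidden" error. Making the vhost address match the Listen address avoids that fall-through:

         Listen 127.0.0.1:8081
         <VirtualHost 127.0.0.1:8081>
             ServerName appgateway.local
             SSLProxyEngine on
             ProxyPass        / https://externalapp.corp:443/
             ProxyPassReverse / https://externalapp.corp:443/
             ProxyRequests Off
             AllowEncodedSlashes On
         </VirtualHost>

     apachectl -S lists which vhost each address:port pair actually maps to and is a quick way to confirm this before editing.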

    Read the article

  • Network access lags for Win7 when server network utilization is high

    - by Jeff Miles
    We have a Dell PE2950 file server running Windows 2008, hosting a DFS namespace of ~1.2 TB. This server has two Broadcom 1Gbps NICs teamed together. When there is high traffic going to the server across the network (greater than 200 Mbps), any Windows 7 client accessing a DFS share at the time experiences severe performance problems. For example: Computer A has an AutoCAD drawing opened directly from the DFS share. Performance is normal, not causing any issues. Computer B begins a file transfer, putting a 11GB file onto a different DFS namespace, on the same server Computer A immediately notices lag while using AutoCAD. The cursor momentarily freezes within AutoCAD every 10 seconds or so, and any browsing of the DFS share is extremely slow. Computer B completes file transfer, and performance resumes to normal for Computer A. This is only affecting Windows 7 clients, using a variety of hardware (desktop + laptop). All of our Windows XP clients see no performance impact during the file transfer. Things I have tried with no change: Had Computer A work from an entirely different RAID array from the file transfer destination Updated NIC drivers on clients and server Enabled TCP offload and receive side scaling on the server NIC (previously disabled when the issue began) Antivirus disabled during file transfer I am currently having a user test applications other than AutoCAD when the file transfer occurs, and will update the question with that result. Does anyone have any recommendations for resolution or additional troubleshooting steps?

    Read the article

  • 500 error when logging in to a subdomain using CodeIgniter

    - by itsdanprice
     I have a website that has been set up and working fine for ages. It's built using CodeIgniter. It's run using .htaccess files to restrict access and hide URLs. All fine. Until a couple of days ago: when we try to access http://admin.dealersupport.co.uk we get a 500 error (this is the back end of the site, held in a separate subdomain). Nothing else has changed on the server. I have tried restoring from a backup from when I know it was working; the problem persists. The only thing I can think of is that we recently upgraded to Plesk 11.0.9 and since then we have been seeing some Apache instabilities. The only thing thrown up by the error logs is this: [Wed Nov 21 08:40:17 2012] [error] [client 94.31.24.129] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/vhosts/dealersupport.co.uk/admin/index.pl, referer: http://admin.dealersupport.co.uk/login I have now added this to my .htaccess files: Options +FollowSymLinks +SymLinksIfOwnerMatch RewriteEngine On And that seems to have eliminated that error from the error logs, but we are still getting a 500 error once we have logged into the backend. Can anyone help?

    Read the article

  • Incorrect durations in mp4 files created by ffmpeg (avconv)

    - by Ruslan Sharipov
    Example usage: avconv -i rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af -vcodec copy -an -sn -map 0 -f segment -segment_format mp4 -segment_time 60 -y %05d.mp4 avconv version 0.8.3-6:0.8.3-1+b1, Copyright (c) 2000-2012 the Libav developers built on Jun 15 2012 13:54:35 with gcc 4.7.0 HandShake: client signature does not match! Metadata: height 480.00 remote_addr: sdp_session {sdp_session,0, {sdp_o,"-","1289703354974145","1289703354974145",inet4, "10.1.12.99"}, "Media Presentation", {inet4,"0.0.0.0"}, {0,0}, [{"control","*"},{"range","npt=0.0 start 30400239.52 timeshift_duration 319250.58 timeshift_size 120000.00 width 640.00 [flv @ 0x1d36a40] Estimating duration from bitrate, this may be inaccurate Input #0, flv, from 'rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af': Duration: N/A, start: 0.000000, bitrate: N/A Stream #0.0: Video: h264 (Baseline), yuvj420p, 640x480 [PAR 1:1 DAR 4:3], 1k tbr, 1k tbn, 2k tbc Output #0, segment, to '%05d.mp4': Metadata: encoder : Lavf53.21.0 Stream #0.0: Video: libx264, yuvj420p, 640x480 [PAR 1:1 DAR 4:3], q=2-31, 1k tbn, 1k tbc Stream mapping: Stream #0:0 -> #0:0 (copy) Press ctrl-c to stop encoding ^Cframe= 9566 fps= 36 q=-1.0 Lsize= -0kB time=318.25 bitrate= -0.0kbits/s video:30348kB audio:0kB global headers:0kB muxing overhead -100.000071% Received signal 2: terminating. Result: serafim@yard:~/video2$ ls 00000.mp4 00001.mp4 00002.mp4 00003.mp4 00004.mp4 00005.mp4 Now try to play the files in the player, such as VLC. And that's what we get: the first fragment (00000.mp4) played well, no problems, but the second (00001.mp4 and beyond) starts the bug manifests itself, namely the file 00001.mp4 first 60 seconds black screen, but since 61 seconds starts playing the video. Attachments: https://dl.dropbox.com/u/760901/rtmp_and_mp4.zip How to get rid of the delay with black screen at the beginning of the segments? Maybe ffmpeg to pass parameters, or third-party software is able to correct the obtained segments mp4?
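
     The black lead-in on every segment after the first is what stream-copied segments look like when each file keeps the running timestamps of the whole recording and does not begin on a keyframe: the player waits until it reaches the first timestamp that actually belongs to that segment. Later ffmpeg builds expose segment-muxer options for exactly this; a hedged variant (these options may not exist in avconv 0.8, so a newer ffmpeg may be required):

         ffmpeg -i rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af \
                -c:v copy -an -sn -map 0 \
                -f segment -segment_format mp4 -segment_time 60 \
                -reset_timestamps 1 -segment_time_delta 0.05 \
                -y %05d.mp4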

    Read the article

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but it is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), but simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf: location / { root /var/www/htdocs/; index index.html; autoindex on; } location /hg { fastcgi_pass unix:/var/run/hg-fastcgi.socket; include fastcgi_params; if ($request_uri ~ ^/hg([^?#]*)) { set $rewritten_uri $1; } limit_except GET { allow all; deny all; auth_basic "hg secured repos"; auth_basic_user_file /var/trac.htpasswd; } fastcgi_param SCRIPT_NAME "/hg"; fastcgi_param PATH_INFO $rewritten_uri; # for authentication fastcgi_param AUTH_USER $remote_user; fastcgi_param REMOTE_USER $remote_user; #fastcgi_pass_header Authorization; #fastcgi_intercept_errors on; } GET's work fine, but POST delivers this error to the error_log: 2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com" What could possibly be the issue? I'm trying to allow read-only access via GET's to the page, but require authorization when using hg push to the same url which sends a POST request.

    Read the article

  • Changing a set-cookie header using mod_rewrite/mod_proxy

    - by olrehm
    I have a bunch of cgi scripts, which are served using HTTPS. They can only be reached on the intranet, not from the outside. They set a cookie with the attribute 'Secure', so that it can only be send via HTTPS. There is also a reverse proxy to one of these scripts, unfortunately using plain HTTP. When a response comes in from my cgi-script with a secure cookie, it is not being passed on via HTTP (after all, that is what that attribute is for). I need however, an exception to this rule. Is it possible to use mod_rewrite/mod_proxy or something similar, to change the set-cookie header in the response coming from my cgi script and remove the Secure, such that the cookie can be passed back to the user using the unsafe HTTP connection? I understand that this defeats the purpose of the Secure in the first place, but I need this as a temporary work around. I have searched the web and found how to add a set-cookie header using mod_rewrite, and I have also found how to retrieve the value of a cookie coming from the client in a cookie header. What I have not yet found is how to extract the set-cookie header received in the response of a script I am proxying for. Is that possible? How would I do that? Ole
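
     For the temporary workaround itself, mod_headers (Apache 2.2.4 and later) can rewrite response headers on the way through the proxy, which avoids touching the CGI scripts; a hedged sketch for the plain-HTTP vhost doing the proxying:

         # requires mod_headers; strips the Secure attribute from cookies set by the backend
         Header edit Set-Cookie "(?i)^(.*);\s*Secure" "$1"

     This edits only the proxied response, so the scripts keep setting Secure for every direct HTTPS client; once the proxy is finally moved to HTTPS the directive can simply be dropped.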

    Read the article

  • What to look for in a switch with LAN/WAN versus an iSCSI SAN?

    - by Luke
    I'm setting up a VMWare ESXi 5 environment with 3 server nodes. Dell recommended 2x Force10 S60 switches shared (iSCSI SAN, LAN/WAN). The S60 switches are extremely powerful. They have 1.25 GB of buffer cache, < 9us latency. But they are very expensive (online price ~$15k per switch, actual quote a little less). I've been told that "by the book" you should at least have 2 internal switches for SAN, and 2 switches for LAN/WAN (each with a redundant). I know some of the pros and cons of each approach. What I'm wondering is, would it be more cost effective to disjoin the SAN from LAN with less expensive switches? The answer to this question highlights what I should be looking for in a switch for the SAN. What should I be looking for in a LAN/WAN switch, in comparison to the SAN? With the above linked question for the SAN: How is buffer latency measured? When you see 36 MB of buffer cache, is that shared or per port? So 36 MB would be 768kb or 36MB per port? With 3 to 6 servers how much buffer cache do you really need? What else should I be looking at? Our application will be heavily using HTML5 websockets (high number of persistent connections). The amount of data being sent is small; Data sent between client <- server isn't broadcasted (not a chat/IM service). We will be doing some database reporting too (csv export, sums, some joins). We are a small business and on a budget. We'd probably only be able to spend no more than $20k on switches total (2 or 4).

    Read the article

  • How to create an alias for a named SQL Server instance

    - by Svish
    On my developer computer I have an SQL Server instance named *developer_2005*. In the resource setting files of a C# application we are creating, the instance name is set to foobar (not really, but just as an example). So when I run the application (in debug or realease) it tries to connect to an SQL Server on localhost, named foobar. I am wondering if it is possible to create an alias or something like that, so that the application actually finds an SQL Server on localhost named foobar, but it is actually connecting to the instance named *developer_2005*. The connection string in the config file of the application is Data Source=localhost\foobar;Initial Catalog=barfoo;Integrated Security=True with provider name System.Data.SqlClient. If I change localhost\foobar to *localhost\developer_2005* then the application can connect like it should. How can I create an alias so that I won't have to change the string in the file? I tried, in SQL Server Management Studio, to create a Server Registration with registered server name "localhost\developer", but this didn't seem to do any good. Not even sure what that really did... But then I discovered SQL Server Configuration Manager\SQL Native Client COnfiguration\Aliases. And I kind of assume this is where the solution lies. But I can't quite figure out how to add a new one... When creating a new one, I have to provide Alias Name, Port No, Protocol and Server, and I don't really have a clue what to put in either of them.
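
     The Aliases node under SQL Native Client Configuration is indeed the right place. As a hedged reading of that dialog: Alias Name should be the exact Data Source string the application uses (here localhost\foobar), Server holds the real instance (localhost\developer_2005), Protocol is typically TCP/IP or Named Pipes, and the port can stay empty for a named instance resolved by the SQL Browser service. A quick way to verify the alias before touching the application (sqlcmd ships with SQL Server 2005):

         REM connect through the alias with Windows authentication and show which instance answered
         sqlcmd -S localhost\foobar -E -Q "SELECT @@SERVERNAME"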

    Read the article

  • Does MySQL have some kind of DoS protection or per-user query limit?

    - by Ghostrider
     I'm a bit at a loss. I'm running a MySQL database that's roughly 1 GB of data and indexes combined, on a dedicated Linux server. The DB version is '5.0.89-community'. Configuration is controlled via cPanel. PHP actually runs elsewhere on shared hosting. IP addresses are static and don't change. Access from the remote IP address is properly configured. The website gets around 10K hits per day, with each hit generating a database query. Some of these queries are expensive (~1 sec execution time). All is fine and well until at some point the DB server starts refusing connections from the client, claiming that the specific user can't access the server from that IP. Restarting the server will always fix the problem for a day or two, and then the same thing happens. There are some other DBs on that server, some of which are hit pretty hard on occasion, but not constantly. One of the apps maintains several persistent connections since it does a couple of updates per minute, though I don't think it's related. What's driving me mad is that I can't figure out why the server would start refusing connections. There is nothing in the logs. This is a hosted dedicated server, so the hosting company created the OS image and I didn't write or go over every line of configuration. I'd do it, but I'm at a loss as to where to start looking. Any advice is appreciated.
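
     The pattern - one client IP suddenly refused until the server is restarted, nothing in the error log - is exactly what MySQL's connection-error blocking looks like: once a host racks up more than max_connect_errors failed or aborted connects (the 5.0 default is only 10), it is refused with "Host ... is blocked because of many connection errors" until the host cache is flushed. A hedged check and fix:

         # unblock every host right now (same effect as the restart, without the downtime)
         mysqladmin -u root -p flush-hosts

         # see the current threshold
         mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connect_errors'"

         # then raise it in the [mysqld] section of my.cnf and restart, e.g.
         #   max_connect_errors = 10000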

    Read the article

  • How to Set up MySQL Server to utilize more memory

    - by Cyril Gupta
    Hi there, I have MySQL setup on Windows along with Plesk. The version is 5.0.45 Community. The databases I have on the server are MyISAM as well as InnoDb, but predominantly innodb. I had 8G memory on my server, but MySQL isn't going up more than 1.3G and tweaking the settings isn't helping. I tried to increase the memory allocation for innodb_buffer_pool_size, it works if I set it up to 1G, but if I set 2G, or above the server doesn't come back online! I want mySQL to use at least 5-6 Gigs of the memory I have for performance, but I can't get this to work. Can anyone please help? My mysql config file is below (there are 2 mysqld sections... when i used MySQL workbench it created another one!) [MySQLD] port=3306 basedir=C:\\Program Files (x86)\\Parallels\\Plesk\\Databases\\MySQL datadir=C:\\Program Files (x86)\\Parallels\\Plesk\\Databases\\MySQL\\Data default-character-set=latin1 default-storage-engine=INNODB query_cache_size=128M table_cache=1024 tmp_table_size=32M thread_cache=32 myisam_max_sort_file_size=100G myisam_max_extra_sort_file_size=100G myisam_sort_buffer_size=2M key_buffer_size=32M read_buffer_size=16M read_rnd_buffer_size=2M sort_buffer_size=8M innodb_additional_mem_pool_size=24M innodb_flush_log_at_trx_commit=1 innodb_log_buffer_size=10M innodb_buffer_pool_size=1G innodb_log_file_size=10M innodb_thread_concurrency=8 max_connections=700 key_buffer=48M max_allowed_packet=5M sort_buffer=2M net_buffer_length=4K old_passwords=1 wait_timeout=20 connect_timeout=60 [client] port=3306 [mysqld] query_cache_min_res_unit = 4096 innodb_additional_mem_pool_size = 1048576 innodb_buffer_pool_size = 1G query_cache_limit = 1048576 key_buffer_size = 8388608 sort_buffer_size = 2097144 query_cache_type = 1 query_cache_size = 312M log-slow-queries connect_timeout = 5 wait_timeout = 20 thread_cache_size = 15 read_buffer_size = 131072 table_cache = 64
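
     Before tuning any further it is worth confirming the build is 64-bit: the paths under "Program Files (x86)" suggest the Plesk-bundled mysqld is a 32-bit binary, and a 32-bit process on Windows cannot address 5-6 GB no matter how large innodb_buffer_pool_size is set - which would also explain the server refusing to start above roughly 1-2 GB. A hedged check (credentials are placeholders):

         mysql -u admin -p -e "SHOW VARIABLES LIKE 'version_compile_machine'"

     If it reports a 32-bit machine type, the fix is a 64-bit MySQL build (or accepting the ~1 GB buffer pool); no my.cnf tweak will get around the address-space limit.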

    Read the article

  • Index a low-cost NAS on Windows 7

    - by JcMaco
    Has anyone found a way to index the files stored on a Networked Attached Storage on Windows 7 so that the files can be available in Windows Search and Libraries? I am referring to the cheap and available NAS like the Western Digital My Book series that use an embedded linux server. Similar question: http://windows7forums.com/windows-7-networking/6700-indexing-nas-drive-libraries.html EDIT Windows help proposes to make the files stored on the NAS available offline. This is obviously not a good solution if the NAS has more data than what the client can store. If the folder is on a network device that is not part of your homegroup, it can be included as long as the content of the folder is indexed. If the folder is already indexed on the device where it is stored, you should be able to include it directly in the library. If the network folder is not indexed, an easy way to index it is to make the folder available offline. This will create offline versions of the files in the folder, and add these files to the index on your computer. Once you make a folder available offline, you can include it in a library. When you make a network folder available offline, copies of all the files in that folder will be stored on your computer's hard disk. Take this into consideration if the network folder contains a large number of files.
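
     One workaround that gets passed around (hedged - results vary, and it does not truly add the NAS to the local index) is to surface the share through an NTFS symbolic link inside a folder that is already indexed, then add that folder to the library; the paths below are placeholders:

         REM run from an elevated command prompt
         mklink /d "C:\Users\Public\NASMedia" "\\mybooklive\Public\Shared Music"

     If the library still refuses the location, the remaining options are the Offline Files route quoted above (impractical when the NAS holds more data than the client) or running the indexing on a machine other than the NAS and searching against that.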

    Read the article

  • Weirdly high ping on direct ethernet connection?

    - by Antriel
     I bought a new Lenovo IdeaCentre H430 PC and I'm having a problem with high pings. Windows 7 with an on-board Realtek NIC. Fresh install, fully updated, drivers installed from the included CD. When I start pinging the router (direct 1 Gb Ethernet connection, 1 hop), pings start at <1ms (which is fine) and after a while they jump to 300-1000ms. I loaded up a live Ubuntu to test whether the problem might be in the hardware. It's not: in Ubuntu pings were always <1ms. I also noticed that when I start using the connection somehow, pings go down to 1ms, but go back up when I stop using it (tested by accessing a live camera feed on the LAN). Power Options are set to max performance. I disabled Interrupt Moderation on the NIC; it didn't help. I tested it in safe mode with networking; same problem there. It slows down our client-server based programs and I have no idea what's causing it. All I could google up was that disabling Interrupt Moderation would help, but it didn't. Has anyone had similar problems? tl;dr: the computer gives high pings to the router when idle and normal pings when the network is under load; it slows down our software significantly.

    Read the article

  • Trouble printing to local printer when connected to VPN with split-tunneling enabled

    - by Marve
    I'm a volunteer network admin for a multi-tenant non-profit office space. One of our new tenants uses a VPN to connect to remote resources using RRAS and Small Business Server 2008. They also have a local network printer for the workstations in our office. When connected to the VPN, they cannot print to the local printer. I informed their network admin that they need to enable split-tunneling to fix this. Their network admin enabled split-tunneling, but apparently printing still didn't work. He told me that I need to open port 1723 on our office firewall to allow it to work. I'm just a novice administrator and not familiar with RRAS, but this doesn't sound right to me and I haven't been able to find anything on the web to validate it. Additionally, my understanding of split-tunneling is that it is handled entirely by the VPN client and should work irrespective of firewall settings. Is my understanding of the situation incorrect? What steps should I take to resolve this problem?

    Read the article
