Search Results

Search found 5845 results on 234 pages for 'commit protocol'.

Page 123/234

  • Windows 7 wifi reports "no network access" and "no internet access" but connects in Fedora

    - by rick2047
    I am running Windows 7 Home Basic (64-bit) on an Acer 5742G laptop with an Atheros wifi adapter. Yesterday I hibernated my computer as I always do, and up until then the wifi was working fine. When I booted the computer up again today I started having a strange problem: it detects my wifi, but after connecting it keeps oscillating between "no network access" and "no internet access". I can't reach anything (the internet or my router). I tried to reset my Internet Protocol stack using the Fix It file, and I also tried to uninstall and reinstall my network driver. Neither helped. I am using the same laptop's Fedora installation right now and the wifi works perfectly. Please help. Edit: to add details, I have Microsoft Security Essentials as my antivirus software and I haven't messed with the firewall or the router configuration.
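
    For reference, the Winsock/TCP-IP reset that the Fix It performs can also be done by hand (a sketch; run from an elevated Command Prompt and reboot afterwards):

        rem rebuild the Winsock catalog and the TCP/IP stack settings
        netsh winsock reset
        netsh int ip reset
        rem clear cached DNS entries and re-request a DHCP lease
        ipconfig /flushdns
        ipconfig /release
        ipconfig /renew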

  • Alternatives to Crashplan for VPS?

    - by Chloe
    I use SFTP Net Drive to mount a remote VPS so I can back it up. However, it has taken over 3 days to scan! I ran 'ls -lR' from my desktop over the mounted network drive and it only took about 5 minutes to list all the files, and there are only about 5000 files and 2 GB. I know CrashPlan can run headless on the VPS itself, but that sounds like a pain to set up, and it takes so much memory on the server; the VPS doesn't have a lot of memory to spare (less than my desktop). Is there another program that speaks the CrashPlan backup protocol and has a command-line interface?
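
    If the CrashPlan protocol itself isn't a hard requirement, a plain rsync pull over SSH is a common low-memory alternative (a minimal sketch; host and paths are placeholders):

        # pull /home from the VPS into a local backup directory, preserving permissions
        rsync -az --delete -e ssh user@vps.example.com:/home/ /backups/vps-home/

    Run from cron on the desktop, this keeps the cost on the VPS down to a single rsync process per run.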

  • Transparently cache files from a network drive in Linux

    - by Vadim
    We have a Linux server that reads files from a network drive and processes them. In a common scenario, a user will log in and access the same files over and over again. The size of the files varies, but the larger ones can be around 50+ MB. The files seldom change. I was wondering if it's somehow possible to cache the files transparently. I can't (and don't want to) change the program that reads the files, nor do I control the protocol by which the files are accessed. I just want something to detect that I access a certain path, copy the file locally (if needed) and then read the file from the local drive. I've read about bcache but can't figure out if it's what I need. Do you have any suggestions? Thanks, Vadim.
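
    If the share happens to be NFS, the kernel's FS-Cache facility plus the cachefilesd daemon does roughly this transparently; bcache, by contrast, caches local block devices, not network mounts. A sketch, assuming Debian/Ubuntu and an NFS mount:

        # install the userspace cache daemon and enable it (set RUN=yes in /etc/default/cachefilesd)
        apt-get install cachefilesd
        service cachefilesd start
        # remount the share with the fsc option so reads go through the local cache
        mount -t nfs -o fsc fileserver:/export /mnt/share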

  • firefox, opera 'The connection was reset' on a few POST method calls on Windows and Ubuntu

    - by Gopalakrishnan Subramani
    My website works well with GET requests and with some POST requests, but other POST pages don't work. For example, the login page uses POST and works fine, yet when I post the data on certain pages, Firefox says "Connecting..." and finally reports a connection timed out error. Opera behaves the same way; Google Chrome works fine. On the server side I use nginx 1.2.4 with HTTPS and uWSGI for a Python (Flask framework) app, with a GeoTrust certificate. The same behavior happens on both Windows 7 and Ubuntu 12.04 with Firefox. I tried Firefox in safe mode: no luck. I set auto-detect proxy settings: no luck. I cleared all cookies: no luck. Can anyone help me fix this issue? I am posting my nginx config. Shame on me, I run nginx as root, which I know is not advised; I need to fix that soon.

        user root;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config: uncomment if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config: uncomment if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;

            ssl_session_cache shared:SSL:10m;
            ssl_session_timeout 10m;

            server {
                listen 80;
                server_name www.example.com;
                rewrite ^(.*) https://example.com$1 permanent;
            }

            server {
                listen 80;
                server_name example.com;
                rewrite ^ https://$server_name$request_uri? permanent;
            }

            server {
                listen 443;
                server_name example.com;
                keepalive_timeout 70;
                ssl on;
                ssl_certificate /root/cc.cert;
                ssl_certificate_key /root/cc.key;
                ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
                #ssl_ciphers HIGH:!aNULL:!MD5;
                ssl_ciphers RC4:HIGH:!aNULL:!MD5;
                ssl_prefer_server_ciphers on;

                location / {
                    try_files $uri @app;
                }

                location @app {
                    include uwsgi_params;
                    uwsgi_pass unix:/tmp/uwsgi.sock;
                }
            }
        }

        #mail {
        #    # See sample authentication script at:
        #    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
        #
        #    # auth_http localhost/auth.php;
        #    # pop3_capabilities "TOP" "USER";
        #    # imap_capabilities "IMAP4rev1" "UIDPLUS";
        #
        #    server {
        #        listen localhost:110;
        #        protocol pop3;
        #        proxy on;
        #    }
        #
        #    server {
        #        listen localhost:143;
        #        protocol imap;
        #        proxy on;
        #    }
        #}
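
    A quick way to take the browsers out of the equation is to replay a failing POST with curl and watch where it stalls (a diagnostic sketch; the URL and form fields are placeholders):

        # -v prints the TLS handshake and request/response, -k skips certificate verification
        curl -vk --data "field1=value1&field2=value2" https://example.com/failing-page

    If small POST bodies go through but larger ones hang, that pattern tends to point at MTU or buffering issues rather than at the application itself.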

  • How does pptpd (poptop) or pppd work with eap-tls and mppe-128?

    - by Henk
    To create a VPN I've installed pptpd on an Ubuntu domU (Debian domUs can also be created). MSCHAPv2 isn't a very strong authentication protocol, so I'd like to use EAP-TLS. I've set up a FreeRADIUS server and certificates for EAP-TLS before (for use with WPA), and I've also set up a pptp server with MSCHAPv2 auth, but I can't figure out how to combine the two. Maybe pppd can use EAP-TLS on its own, but I can't find support for it in the Ubuntu package. If I need to patch the package, that's fine; I know how to patch Debian packages (provided the patch applies cleanly). Also, can MPPE still be used when pppd is configured to use EAP? The manual says several times that MPPE requires MSCHAP, but other docs, like http://www.nikhef.nl/~janjust/ppp/ , seem to refute that. The clients are running Mac OS X Leopard and GNU/Linux; there's no need to fix anything for Windows.
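
    For reference, the FreeRADIUS half of EAP-TLS (the part already proven with WPA) lives in eap.conf; in FreeRADIUS 2.x it looks roughly like this (a sketch with placeholder paths; it does not address the pppd side, which is what the patches discussed on the linked page target):

        eap {
            default_eap_type = tls
            tls {
                private_key_file = /etc/freeradius/certs/server.pem
                certificate_file = /etc/freeradius/certs/server.pem
                CA_file = /etc/freeradius/certs/ca.pem
                dh_file = /etc/freeradius/certs/dh
                random_file = /dev/urandom
            }
        }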

  • An equivalent of IceCast but for live video streaming?

    - by Kedare
    Hello, I am looking for a solution to stream live video like this:

        A camera/webcam/video output ---> Stream server ---> Clients

    and, if possible, multiple stream servers (like IceCast allows):

        A camera/webcam/video output --> Master Stream Server +---> Slave Stream Server ---> Clients
                                                              |                         `--> Clients
                                                              `---> Slave Stream Server ---> Clients
                                                                                        `--> Clients

    The clients will be in Flash, so I think RTMP should be a good protocol. I've heard of Red5: is it good for that? Does it scale? I would like to get statistics (number of clients, bandwidth, etc.); is that possible with Red5? Do you know any other good solution for this? (Only free, and if possible open source.) Thank you!
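
    For the capture side, pushing a camera into an RTMP server (Red5 included) generally looks like this with ffmpeg (a sketch, assuming a V4L2 device on Linux; the server URL and stream name are placeholders):

        # grab the webcam, encode to H.264, and publish over RTMP in an FLV container
        ffmpeg -f video4linux2 -i /dev/video0 -c:v libx264 -preset veryfast -f flv rtmp://streamserver.example.com/live/mystream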

  • Force a browser to load the 'https' edition of a website, not the 'http'?

    - by warren
    This is similar to this previous question, but I believe it's a bit different*. Sites like GMail support a preference that pushes all traffic through the SSL edition of the site rather than the plain-text protocol. For sites that don't offer such a preference (or ones that may, but where I have been unable to find it, like Facebook), is there a way, using only the browser (perhaps with a plugin or addon), to always try SSL first and fall back to plain text only if SSL fails? Is such a solution available on Windows, Mac OS X, and Linux? Just one of them? * The previous question was looking for external applications that would accomplish this goal.

  • ASP.NET directories blocked from VisualSVN Server behind reverse proxy in IIS 6

    - by user143344
    I've got VisualSVN Server running behind a reverse proxy in IIS 6 on Windows Server 2003. This isn't ideal, but for the main web app on the server I've only got one IP address and one SSL certificate available. Everything works except committing to or browsing the default ASP.NET directories (App_Browsers, App_Code, App_Data). SVN commits fail for these directories, which I believe is because IIS will never serve them by default. The reverse proxy uses a virtual directory in IIS; is there a change I can make in the web.config for this virtual directory to get around the issue?

  • Simulating network latency for localhost connection on Windows 7

    - by nitro2k01
    I need to simulate network latency for a program running on the local computer, connecting to a local service. Thus far I have tried dummynet (a Windows build of ipfw), which I got working after some trial and error. While it generally works, I can't seem to get it to filter localhost traffic: even after adding a rule from any to any, which does affect external traffic, it makes no difference for local connections. I would appreciate it if anyone knows how to simulate local latency using dummynet or a different tool. The tool should be able to simulate latency generically on IP packets (TCP and UDP) and not be protocol-specific.
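
    For reference, the basic dummynet recipe for adding latency looks like this (a sketch; the localhost caveat above may still apply, since on many systems traffic to 127.0.0.1 bypasses the filter entirely):

        # send all IP traffic through pipe 1, then give the pipe a fixed delay
        ipfw add pipe 1 ip from any to any
        ipfw pipe 1 config delay 100ms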

  • Apt Stalls When Using HTTP Sources

    - by UltraNurd
    I was getting some (to me) inexplicable behavior from apt-get/aptitude on an admittedly crusty old webserver. While the machine was otherwise running fine, as soon as I tried a package upgrade, after downloading a few updates it would stall completely, then my SSH session hung (and I was unable to reconnect), thus requiring a hard restart. First, I switched to a different package source in /etc/apt/sources.list, but still got the same behavior. At this point I was assuming the NIC was dying in some weird way... but as soon as I changed the package source to use FTP instead of HTTP, everything worked fine, and I was able to upgrade. For now I'm not too concerned, since I have an easy workaround, but it implies that there's something very weird with my network setup, since it seems to be protocol (or port?) specific. I didn't think any of my NAT setup would affect outbound traffic, but I could be crazy. Any ideas what I should look for?
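
    One classic cause of large-transfer-over-HTTP stalls is a path MTU blackhole, and it's cheap to test for (a diagnostic sketch; substitute one of your mirrors for the host):

        # 1472 bytes of payload + 28 bytes of headers = a full 1500-byte frame;
        # -M do forbids fragmentation, so failure here while smaller sizes work points at MTU
        ping -M do -s 1472 archive.ubuntu.com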

  • xf86OpenConsole: Cannot open /dev/tty0 (No such file or directory)

    - by mazgalici
    root@mazgalici:~# startx

        X.Org X Server 1.7.6
        Release Date: 2010-03-17
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.24-28-server i686 Ubuntu
        Current Operating System: Linux mazgalici 2.6.18-194.26.1.el5.028stab079.2PAE #1 SMP Fri Dec 17 19:34:22 MSK 2010 i686
        Kernel command line: quiet
        Build Date: 10 November 2010 11:25:26AM
        xorg-server 2:1.7.6-2ubuntu7.4 (For technical support please see )
        Current version of pixman: 0.16.4
        Before reporting problems, check to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.0.log", Time: Tue Jan 11 01:28:48 2011
        (==) Using config directory: "/usr/lib/X11/xorg.conf.d"

        Fatal server error:
        xf86OpenConsole: Cannot open /dev/tty0 (No such file or directory)

        Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        Please also check the log file at "/var/log/Xorg.0.log" for additional information.

        ddxSigGiveUp: Closing log

  • How do I set my router up to authenticate with my service provider?

    - by João Lourenço
    Currently I have this setup: a D-Link DSL-2640U connected to the phone line, configured as follows:

        VPI/VCI: 0/40, Service Category: UBR Without PCR
        Service: pppoe_0_40_3, Interface: nas_0_40
        Protocol: Bridging (LLC/SNAP-Bridging), 802.1q not enabled, Bridge Service enabled

    The computers have individual PPPoE connections (in Network and Sharing Centre - Set up a new connection or network - Connect to the Internet - Broadband (PPPoE) - username and password). How would I move these settings from each individual PC to the router, so that all I need to do is connect to the wireless? I have been unsuccessful setting up anything other than Windows PCs on the network (I have tried Macs, iOS and Nokia smartphones too). Thanks in advance for the responses!

  • tcpdump dns output codes

    - by tim
    Captured on the nameserver:

        21:54:35.391126 IP resolver.7538 > server.domain: 57385% [1au] A? www.domain.de. (42)

    What does the percent sign in 57385% mean? As far as I can see, 57385 is the client's sequence number; a plus would mean the RD bit is set. Second question: what does the ARCOUNT do in the query? As I understand the tcpdump man page, the [1au] means tcpdump treats this as a protocol anomaly, as would I. I see this in a lot of queries.

  • linux + automated rsync command

    - by Diana
    My target is to copy /tmp/my_file from 10.10.10.1 to my Linux machine without a login and password. I set the passwords file with the right password (secret123), so rsync should work; please advise why I get Permission denied. Remark: the 10.10.10.1 address is a Linux machine, Red Hat 5.3.

        rsync -WavH --password-file=/tmp/passwords --progress root@10.10.10.1:/tmp/my_file .
        Permission denied.
        rsync: connection unexpectedly closed (0 bytes read so far)
        rsync error: error in rsync protocol data stream (code 12) at io.c(165)

        more /tmp/passwords
        secret123

        ls -ltr passwords
        -rwxr-xr-x 1 root root 10 Sep 12 17:32 passwords
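
    For what it's worth, rsync's --password-file only applies when talking to an rsync daemon (rsync:// URLs or host::module paths); over a plain remote shell the usual passwordless route is an SSH key pair (a sketch, assuming the root account shown above):

        # generate a key with an empty passphrase, install it remotely, then rsync needs no password
        ssh-keygen -t rsa
        ssh-copy-id root@10.10.10.1
        rsync -WavH --progress root@10.10.10.1:/tmp/my_file .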

  • Issue with Visual C++ 2010 (Express) External Tools command

    - by espais
    Hi all, normally we develop in VS 2005 Pro, but I wanted to give VS 2010 a spin. We have custom build tools based on GNU make that are called when creating an executable. This is the error I see whenever I call my external tool:

        ...\gnu\make.exe): *** couldn't commit memory for cygwin heap, Win32 error 487

    The caveat is that it still works perfectly fine in VS 2005, as well as when called straight from the command line. Also, my external tool is set up exactly the same way as in VS 2005. Is there some setting somewhere that could cause this error to be thrown?
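
    For background, Win32 error 487 is ERROR_INVALID_ADDRESS; with Cygwin it usually means some DLL got loaded at the address the Cygwin heap wanted, for which the conventional treatment is rebasing Cygwin's DLLs (a sketch; close every Cygwin process first):

        # from C:\cygwin\bin\ash.exe, with no other Cygwin processes running
        /bin/rebaseall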

  • seamless mode remote desktop connection from linux to windows

    - by mateusz.fiolka
    I am a programmer using Linux as my main OS, but sometimes I need to use Windows (e.g. Office, EA). I'm running QEMU with KVM to access the Windows "machine". I would like to achieve something like what is described here: https://help.ubuntu.com/community/SeamlessVirtualization (that is, running chosen applications on Windows and displaying them locally on Linux as separate windows, achieving good desktop integration). However, seamless RDP is buggy and doesn't work on my machine (probably because I'm using a tiling window manager and a 64-bit system). Are you aware of any solution other than RDP's seamless mode? I would prefer to keep using QEMU because it uses CPU hardware virtualization, so a different protocol/client combination would be preferable.

  • When can an FTP server close its passive connections?

    - by Don Kirkby
    Does the FTP protocol allow the server to close any of its passive connections while the client is still connected? Can it tell when the client has finished receiving and then close the connection? I'm including an FTP server in my application using the pyftpdlib Python project. I've got it to work in active and passive mode, but I'm a bit concerned about when it closes its passive connections. I've tried connecting to it with both FileZilla and the default ftp command in Ubuntu, and in both cases I get a new passive port for every request. That is, if I sit in the root folder and type ls 10 times, I use up 10 ports. This means I have to allocate a big block of passive ports for the FTP server so it won't run out. As soon as the client disconnects, the server releases all the passive connections associated with that client and those ports can be reused; however, a long-running connection could use up a lot of ports.

  • WebSVN accept untrusted HTTPS certificate

    - by Laurent
    I am using WebSVN with a remote repository. This repository uses the https protocol. After having configured WebSVN, I get this on the WebSVN web page:

        svn --non-interactive --config-dir /tmp list --xml --username '***' --password '***' 'https://scm.gforge.....'
        OPTIONS of 'https://scm.gforge.....': Server certificate verification failed: issuer is not trusted

    I don't know how to tell WebSVN to execute the svn command so that it accepts and stores the certificate. Does anyone know how to do it?

    UPDATE: It works! In order to keep things well organized, I updated the WebSVN config file to relocate the Subversion config directory to /etc/subversion, which is the default path for Debian:

        $config->setSvnConfigDir('/etc/subversion');

    In /etc/subversion/servers I created a group and associated the certificate to trust:

        [groups]
        my_repo = my.repo.url.to.trust

        [global]
        ssl-trust-default-ca = true
        store-plaintext-passwords = no

        [my_repo]
        ssl-authority-files = /etc/apache2/ssl/my.repo.url.to.trust.crt

  • Inaccurate bandwidth limiting in altq queues

    - by overkordbaever
    I'm setting up an environment with one Linux server, one OpenBSD router and one Linux client, and I want to be able to limit how much bandwidth the client can use. I've been performing tests with netcat and time (using time to measure the duration of a transfer done with netcat), and the queues aren't exact at all: when I set a bandwidth limit of 10 Mbit the client cannot use more than about 5 Mbit, and with a limit of 100 Mbit the client cannot use more than around 50 Mbit. (These tests use the TCP protocol; the queues will for some reason not work with UDP.) The config looks like this (using the 100 Mbit limit from the example):

        # queue rules
        altq on { $int_if, $ext_if } cbq bandwidth 100Mb queue { def, low }
        queue def bandwidth 0Mb cbq(default)
        queue low bandwidth 100Mb cbq(default)

        # pass rules for the test
        pass out quick from $int_if to $ext_if queue low
        pass in quick from $ext_if to $int_if queue low
        pass out quick from $ext_if to $int_if queue low
        pass in quick from $int_if to $ext_if queue low

  • PDAnet on Android IP on PC is not public IP. Where does the NAT take place, PDAnet or Verizon?

    - by lcbrevard
    When using PDAnet on a PC (Win7 Ultimate) to USB-tether a Motorola Droid on Verizon 3G, the IP address of the PC appears to be public, 64.245.171.115 (64-245-171-115.pools.spcsdns.net), but connections show as coming from another public IP, 97.14.69.212 (212-sub-97.14.69.myvzw.com). Someone is performing Network Address Translation, either PDAnet or something within the Verizon 3G network. Can someone tell me who is doing the NAT? Is it PDAnet or is it Verizon? Is there any possibility of setting up port forwarding, such that connections to the public IP 97.14.69.212 (212-sub-97.14.69.myvzw.com) are forwarded to the PC? We are testing a network protocol that requires either a true public IP or a range of ports forwarded from the public Internet to the system the software runs on (actually Linux, hosted by VMware Player or Workstation on a PC running Windows).
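
    A quick way to see where the translation sits is to compare the tether interface's own address with what remote servers report (a sketch; ifconfig.me is just one of several what's-my-IP echo services):

        rem the address Windows assigns to the tether interface
        ipconfig
        rem the address the outside world sees
        curl http://ifconfig.me

    If ipconfig already shows the first public IP while the echo service reports the second, the NAT is happening upstream of the PC, in the carrier network rather than in PDAnet itself.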

  • Testing performance from around the world - how do I get a linux shell easily in multiple countries?

    - by Matthew O'Riordan
    We are building a socket-based service where latency is paramount, and as such we have servers distributed across 7 data centres around the world. However, whilst we know we're bringing the servers closer to the clients, it's very difficult to know how effective this is, and importantly, what difference it makes compared to our competitors. As such, we want to run simple scripts that test latency and throughput for both our service and our competitors', which is easy enough using Amazon; however, Amazon only has 7 data centres. We would like to know, for example, how we perform in locations all over the world, such as South Africa, Australia, China, Peru, etc. Does anyone know of a service that would let us piggyback on their global infrastructure and run some scripts to test this performance? The obvious contenders are people like Monitis, but I don't think they would allow us to run custom scripts, only standard protocol monitors. Thanks for your help. Matt

  • Is Page-Loading Time Relevant?

    - by doug
    Take this (ServerFault) page, for instance. It has about 20 elements. When the last of these has loaded, the page is deemed "loaded", but not before. This is certainly the protocol used by our testing service (which is among the small group of well-known vendors offering that sort of service). Obviously this method is based on a clear, definite endpoint, so it's easy to apply with concomitant reliability. I think it's also the metric used by the popular Firefox plugin YSlow. For my employer's website, the last items to load are nearly always tracking code, tracking pixels, etc., so from the user's point of view (their perception) the page was "loaded" well before it had actually loaded by the criterion our testing service uses (15-20% earlier is a rough estimate). I'm sure I'm not the first person to consider this, nor the first to wonder whether it causes micro-optimization while ignoring overall system-level, user-perceived performance. So my question is: are there other more practical (yet still reasonably precise) measures of page loading time?

  • Using VLC to Unicast High Definition Webcam over local gigabit LAN with low/zero delay

    - by Robin Day
    We're setting up a webcam "window" between two offices in the same building. The two PCs are connected to the same gigabit switch. We're using VLC to stream the webcam over HTTP using the following commands:

        vlc dshow:// :dshow-caching="0" :dshow-size="640x480" :sout=#transcode{vcodec=h264,vb=0,scale=0}:http{mux=ffmpeg{mux=flv},dst=:8080/} :no-sout-rtp-sap :no-sout-standard-sap :ttl=1 :sout-keep

        vlc http://192.168.0.1:8080 :http-caching="0"

    Even with the caching set to zero, the delay in the image is a good 2-3 seconds, and the CPU usage of each PC is maxed. I'm guessing the transcoding causes much of the delay. Can anyone suggest changes to these command lines that will reduce the transcoding load, send the webcam over a different protocol, or anything else that will reduce the delay? Bandwidth is not an issue at all, as the PCs can be connected to a dedicated switch/VLAN if required.
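
    One knob that often helps more than the caching flags is telling the encoder itself to optimize for latency (a sketch, assuming a VLC build whose x264 module accepts preset/tune options; the rest of the sender command is unchanged):

        vlc dshow:// :dshow-caching="0" :dshow-size="640x480" :sout=#transcode{vcodec=h264,venc=x264{preset=ultrafast,tune=zerolatency},vb=2000}:http{mux=ffmpeg{mux=flv},dst=:8080/} :no-sout-rtp-sap :no-sout-standard-sap :sout-keep

    ultrafast also cuts the CPU cost of encoding substantially, at the price of a higher bitrate for the same quality, which shouldn't matter on a gigabit LAN.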

  • cannot access my own computer through My Network Places

    - by WebMAOhist
    My home Windows XP Pro SP3 machine is a DHCP client receiving its configuration from my ISP. In Windows Explorer, opening My Network Places - Microsoft Windows Network shows the workgroup after a delay of 3 minutes and then pops up a message box:

        Microsoft Windows Network
        Workgroup is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. The list of servers for this workgroup is not currently available.

    I am logged in as the local machine Administrator. The Internet is accessible (I am writing this post through it). The firewall is disabled. The Computer Browser service, and all networking services I could find, are running. Under Control Panel - Network Connections - Properties (of the connection) - Internet Protocol (TCP/IP), btn Properties - tab General, btn "Advanced..." - tab WINS, rbtn "Enable NetBIOS over TCP/IP" is checked. Why can't I access my own PC (and shares on it) through My Network Places? What is the possible problem? How do I diagnose it?
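
    For diagnosis, the command-line views of the same machinery often return a more specific error than the Explorer message box (a sketch; MYPC stands in for the machine's NetBIOS name):

        rem a specific error number here narrows the problem down
        net view \\MYPC
        rem shows whether NetBIOS names are registered on this machine
        nbtstat -n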

  • export block device over network without root

    - by dschatz
    I'm trying to export a file as a block device over the network. I do not have root access on the machine where the file exists, but I do have root access on the machine(s) where I will mount the block device. I've looked at ATA-over-Ethernet and iSCSI, but there don't seem to be any implementations that allow me to export the block device without root (some even require kernel modules). Is there an implementation of either of these, or of some other protocol, that doesn't require root? Perhaps I could tunnel Ethernet over IP to do this?
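
    NBD (Network Block Device) may fit: the nbd-server half is a plain userspace program that needs no root as long as it binds an unprivileged port, while the kernel module is only needed on the client side, where root is available. A sketch (older nbd-server versions accept this port-and-file invocation; newer ones want a config file with named exports):

        # on the machine without root: export the file from userspace
        nbd-server 10809 /home/me/blockfile

        # on the machine with root: attach and mount it
        modprobe nbd
        nbd-client fileserver.example.com 10809 /dev/nbd0
        mount /dev/nbd0 /mnt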
