Search Results

Search found 17683 results on 708 pages for 'side loading'.

  • Euro character messed up during FTP transfer

    - by djechelon
    My customer is using a very outdated e-commerce management system on my hosting service. The vendor no longer provides any support for that product.

    Brief explanation: the shop website, which claims to run on a LAMP stack, is built by an old Visual Basic Windows application backed by MS Access. The user constructs the shop, defines the HTML template, adds products and categories, and so on. The VB executable then builds the PHP pages (one for each template page) and the SQL script to run on MySQL. It also uploads everything via FTP and runs the installation/upgrade script on its own.

    The problem: browsing the website, many product descriptions are cut off before the euro sign. For example, what was supposed to be "Product price €1000" becomes "Product price".

    The analysis:
    - MySQL contains the description truncated at the € sign, so it's not PHP's fault.
    - The Access databases contain the full descriptions with the € sign, so it's not the webmaster writing bad descriptions, nor eDisplay cutting them.
    - The SQL script that runs once the site is uploaded, as stored on my local machine before upload, contains the € sign.
    - The same script, after being FTPed by eDisplay and opened with nano over SSH, shows the € sign mangled like this: ^À
    - The vsftpd log reports (obfuscated for privacy):

        Sat Dec 15 11:16:57 2012 22 xxx.xxx.128.13 1112727 /srv/www/domains/xxxxxx.it/htdocs/db.sql b _ i r xxxxxxx ftp 0 * c

      which looks like a binary transfer (and also a huge security vulnerability, since the whole database can be downloaded over unauthenticated HTTP).
    - The eDisplay internal FTP client provides no option for ASCII/binary transfer modes.
    - [Add] Manually uploading the SQL file via SFTP also mangles the euro sign.
    - [Add2] Manually uploading with the Xftp client in explicit ASCII mode doesn't fix it either.

    It looks like the file gets uploaded as binary. Perhaps it all worked on the customer's previous host because that host ran Windows.

    The server: an Azure virtual machine running openSUSE 12.2 with both vsftpd and openSSH.

    The question: without asking the customer to upload files manually with FileZilla or to replace € with &euro; (he refuses both), what can I do on the server side to stop vsftpd from mangling the euro sign?
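
    Since ^À hints that the euro arrives as a single byte from a Windows code page (the euro sign is byte 0x80 in Windows-1252) rather than being altered in transit, one thing worth testing server-side is transcoding the dump to UTF-8 before MySQL consumes it. A minimal sketch, assuming the source really is Windows-1252; the path is a placeholder:

        #!/bin/sh
        # Hedged sketch: transcode an uploaded SQL dump from Windows-1252 to UTF-8.
        # Path and source charset are assumptions; verify the charset first with
        # `file -i <file>` before converting.
        SQL=/srv/www/domains/example.it/htdocs/db.sql
        iconv -f WINDOWS-1252 -t UTF-8 "$SQL" > "$SQL.utf8" && mv "$SQL.utf8" "$SQL"

    This could run from cron or from the upgrade script itself, before the dump is imported.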

  • Windows 8 install app for multiple user accounts

    - by Robert Graves
    I purchased Adera episode 2 intending to play through it with my son. We each have our own user account on the same PC. When my son logged in, he was prompted to purchase the app that I had already purchased, installed, and played on that same PC.

    So I checked the Terms of Use. After selecting an app in the Store, there is a Terms of Use link on the left side under the Install button; it is almost impossible to identify as a link unless you hover over it. The Terms of Use are standard across all apps in the Store, not specific to particular apps. They say that an app may be installed on up to five devices, but say nothing about multiple user accounts on those devices. However, this Microsoft blog article indicates that it is allowed:

        Say, for example, that your family has a shared PC. You have previously used your Microsoft account to purchase a game that all your kids like to play. You can install it for each of your kids by having each of them sign in to their Windows accounts on the shared PC, then launch the Store and sign in to the Store using your own Microsoft account. There, you'll see all your apps and you can re-install the app on your kid's Windows account. Installing apps on multiple user accounts on a shared PC still only counts as one of the five allowable PCs where you can install apps.

    So I have two questions:
    - Is it permissible under the Terms of Use to install the app under multiple accounts on the same device?
    - If so, how do I do that, given that my son has already signed in to the Store using his own Microsoft account?

  • Is there a man in the middle attacking my server machine?

    - by GongT
    My server has worked well for about half a year, but a strange thing happened a few hours ago. The server has two IP addresses, 58.17.85.19 and 117.21.178.19. When I navigate to http://58.17.85.19, nothing is different from before. But http://117.21.178.19 returns a "302 Object moved" and becomes a redirect loop.

    I ran some tests with:

        cmd="wget http://117.21.178.19/?xx=$RANDOM --max-redirect 0 -S --no-cache -O -"

    Step by step:
    - Run $cmd on my PC and on my friend's PC (we live on opposite sides of China, far apart): got 302.
    - Run $cmd on the server itself: got 200 OK (the content is the correct output of index.php).
    - Run $cmd on another server in the same machine room: got 200 OK.
    - Telnet from my PC and type an HTTP request by hand: got 200 OK.
    - Shut down php-fpm, run $cmd on my PC: got 302.
    - Run $cmd on the server: 502 Bad Gateway.
    - Shut down nginx, run $cmd on both the server and my PC: connection refused.
    - Create an iptables rule refusing any connection to 58.17.85.19:80, run `nc -l 80 -k -vvv` on the server and run $cmd on my PC. nc shows that the server accepts the connection ("Connection from [my ip]") and that my connection is then closed ("Remove fd xx from list") - yet wget still dumps out a response: got 302.

    Normally nc would accept the connection, dump the HTTP request from the client, and the client would wait for a response forever (in fact it would eventually close on timeout), because nc can't produce one. So where did my request go? Who sent a response to the client? Is there a virus on my server? If so, why doesn't 58.17.85.19 have the same problem? Or am I being attacked by a man in the middle?
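
    One way to narrow this down is to watch the wire on the server itself while repeating the wget test; if the 302 never appears in the capture on this box, it is being injected somewhere upstream. A minimal sketch - the interface name is an assumption:

        # Hedged sketch: capture port-80 traffic for the affected address.
        # eth0 is a placeholder; check `ip link` for the real interface name.
        tcpdump -i eth0 -nn -A 'host 117.21.178.19 and tcp port 80'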

  • Truecrypt files corrupted after moving PC into another case

    - by Dygerati
    I recently bought a new PC case and transferred all of my hardware into it. The only hardware modification was the addition of two identical RAM modules. The entire process went smoothly, and everything worked and booted as before.

    The only side effect appeared when I accessed one of my file-based hidden TrueCrypt volumes shortly thereafter. Some of the files in the volume - NOT all of them - seem to be entirely corrupted. The directory and file names are garbled characters, yet a few of the directories in the same volume appear and function normally. Also, all files in the non-hidden TrueCrypt volume are still intact. Is this not weird? The only other real change I can think of is that the hard drives are now connected to different SATA ports on the motherboard. I really don't understand the TrueCrypt encryption well enough to know what could cause this, and the fact that not all the files were corrupted makes it more bizarre still.

    So, first off (and I'm not too hopeful on this point): would it be possible to restore these files? I had a backup of most, but not all, of the files involved. Beyond that, I'm just curious how this happened and how I can prevent it next time. Thanks!

  • How can I optimize my Ajax calls to deliver at 60ms?

    - by Quintin Par
    I am building autocomplete functionality for my site, and Google's instant results are my benchmark. When I look at Google, its 50-60 ms response times baffle me; they look insane. In comparison, mine takes around 500 ms. To give you an idea, my results are cached on the load balancer and served from a machine that has httpd slow start and initcwnd fixed. My site is also behind CloudFlare. From a server-side perspective I don't think I can do anything more. Can someone help me take this 500 ms response time down to 60 ms? What more should I be doing to achieve Google-level performance?

    Edit: People, you seemed to be angry that I compared myself to Google and that the question is very generic. Sorry about that. To rephrase: how can I bring the response time down from 500 ms to 60 ms, given that my server's own response time is just a fraction of a millisecond? Assume the results are served from Nginx/Varnish with a cache hit. Here are some answers I would give myself, assuming the response sizes remain more or less the same:
    - Ensure results are HTTP-compressed.
    - Ensure SPDY if you are on HTTPS.
    - Ensure initcwnd is set to 10 and slow start is disabled on Linux machines (sketched below).
    - Etc.

    I don't think I'll end up at Google's 60 ms, but your collective expertise can easily help shave off 100 ms, and that's a big win.
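
    For the initcwnd item: on Linux the initial congestion window is a per-route attribute, so a minimal sketch looks like the line below. The gateway and interface are assumptions; check `ip route show` first for the real values:

        # Hedged sketch: raise the initial congestion window on the default route.
        # 203.0.113.1 and eth0 are placeholders for the actual gateway and interface.
        ip route change default via 203.0.113.1 dev eth0 initcwnd 10

    The setting does not survive a reboot on its own, so it would typically go in an interface post-up hook.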

  • Linux Distro - GUI similar to Windows

    - by DeaconDesperado
    I am in the process of refurbishing several older laptops for use by a couple of college guys we have in training to learn basic web development in Python. These are students who intern at my company and hope to do some work when summer comes, building simple client-oriented webapps (learning the basics of OOP, MVC webapp design in Flask, etc.). We're trying to function as the "practical" side of their education.

    I would like to get them set up on these machines we have sitting around, but I'd like to use a Linux distro whose GUI closely approximates what they are compelled to use at school (Windows). I don't really have much of a preference as far as the GUI goes, since much of what we'll be learning together happens on the command line; I just see this as an easier adjustment for them while they are still reliant on a graphical environment.

    In the past I'd have gone straight for Ubuntu, but since it adopted the Unity GUI, overall responsiveness can be pretty clunky on older machines - especially since these machines (there are four of them) run the gamut on specs (though all are at least 1.0 GHz and none has anything better than basic integrated video). Has anyone had to set up a similar working environment in Mint, bare Debian, or Zorin? Thanks.

  • NAS for Mac OS X Server

    - by SamAdmin
    I'm using Mac OS X Server and want to allow users who connect to their network accounts to store their data on a NAS drive. I want the users to authenticate against the Lion server, since that gives me better policies and management, while their AFP share lives on the NAS.

    I've looked into home directories and network logins, but I don't want the users to land in a different login environment - just an authentication against their account on the Lion server, with the Finder taking them to their own storage area on the NAS drive.

    Currently I am using FreeNAS for both authentication and storage, but there are getting to be far too many people to manage each AFP share and account by hand. Using FreeNAS alone is also extremely limiting for expansion, and if something goes wrong with one entity, the entire system goes down. Using the Lion server for user accounts and policies will be much better for this expanding business.

    I have looked into LDAP - using the Lion server as an LDAP server for FreeNAS to authenticate against - but I've had issues with that and thought a different approach might work better from the other side of the situation: providing the account with somewhere to store data, rather than having the AFP share authenticate against an LDAP server. Am I wrong to try it this way? Is it possible to logically add storage to a Mac OS X Server so that it is recognised as a local drive and can be used for network accounts?

  • Print over remote CUPS server, but just show a subset of the printers.

    - by jdm
    I'd like to print from my Ubuntu laptop (Karmic) to some networked printers. Our organisation uses a CUPS server with several hundred printers. What I know I can do is:

        CUPS_SERVER=printers.company.com acroread document.pdf

    and then Adobe Reader shows me all available printers to select from. However, it takes a couple of minutes to display the large list, which is really annoying (the desktop PCs here suffer from this too).

    The other option is to add a new printer with an address like ipp://printers.company.com/printer/bldg1_hp8150 to the Ubuntu printer configuration (i.e. the local CUPS server). However, it asks me for a driver. I don't want to - and sometimes can't - specify a driver, since some printers don't appear in the list. I'd like to let the remote CUPS server handle the driver part (as it does when I set CUPS_SERVER) and do no preprocessing or "driver stuff" on my side.

    The ideal thing would be if I could somehow add the remote printer list to my local CUPS server and apply a filter, so that it displays only printers matching bldg1_*. This feature was available in KDE 3.x, but I can't find anything similar in Ubuntu/GNOME. Any suggestions?
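
    For the "no driver on my side" part, a minimal sketch of what should work from the command line: define the remote printer locally as a raw queue, so jobs are passed through untouched and the remote CUPS server applies its own driver. The queue name and URI are taken from the example above:

        # Hedged sketch: local raw queue forwarding to the remote CUPS server.
        lpadmin -p bldg1_hp8150 -E \
            -v ipp://printers.company.com/printer/bldg1_hp8150 \
            -m raw

    This sidesteps the driver prompt, though it has to be repeated per printer rather than filtering the whole remote list.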

  • What to do about old MacBook

    - by John
    I have a white MacBook running Mac OS X 10.5.8. I accidentally dropped it, and about an inch of the far right side of the screen is blacked out; I can't see the clock and the Trash because of this. I checked how much it would cost to repair, and it's a flat rate of $300. I thought that was kind of expensive - about a third of the cost of the MacBook itself - and since accidentally dropping a notebook is something that will happen sooner or later, I figured getting a Lenovo would be best. I ended up getting a 15-inch MacBook Pro instead, but do you think it's worth the money to fix my old MacBook? I've even seen used MacBooks on eBay for about $400.

    Anyway, I don't know what to do with it. I actually like its screen and keyboard better than the MacBook Pro's: the screen has less glare, the keyboard is crisper and more fun to type on, and it's more wieldy when I'm using it while lying down.

  • Single application through OpenVPN tunnel (Debian Lenny)

    - by user14124
    I'm using Debian Lenny and I want to tunnel rtorrent, and only rtorrent, through an OpenVPN tunnel. I have a tunnel running, and the config file looks like this:

        client
        dev tun
        proto udp
        remote openvpn.xxx.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca /etc/openvpn/xxx/keys/ca.crt
        cert /etc/openvpn/xxx/keys/client.crt
        key /etc/openvpn/xxx/keys/client.key
        tls-auth /etc/openvpn/xxx/keys/tls.key 1
        ns-cert-type server
        comp-lzo
        verb 3
        auth-user-pass
        script-security 3
        reneg-sec 0

    My idea is that I could run a sockd proxy internally that redirects traffic to the OpenVPN tunnel, and then use the *nix "proxifier" application tsocks to let rtorrent connect through that proxy (rtorrent doesn't support proxies itself). I'm having trouble configuring sockd, though, because my IP inside the VPN changes every time I connect. Someone said this config file would help: http://ircpimps.org/sockd.conf - but since my IP changes on each connect, I don't know what to put in it. I have no control over the config on the host side. Any help wanted; any other method is very welcome.
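
    On the changing-IP problem: if the SOCKS server is Dante, its sockd.conf can, as far as I know, name an interface instead of an address for the external side (e.g. "external: tun0"), which sidesteps the renumbering on every reconnect. The tsocks side could then look like this minimal sketch - the addresses, port, and local network are assumptions:

        #!/bin/sh
        # Hedged sketch: wrap rtorrent with tsocks pointed at a local SOCKS server.
        # Assumed /etc/tsocks.conf (all values are placeholders):
        #   server = 127.0.0.1
        #   server_port = 1080
        #   server_type = 5
        #   local = 192.168.0.0/255.255.255.0
        tsocks rtorrent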

  • Cron job checking for changes in Git repository

    - by HNygard
    We have just moved our server configs into a Git repository, so there should not be any uncommitted changes in any of the repository folders. I was thinking about setting up a cron job to check for uncommitted changes. How could a cron job be set up to check for changes in a Git repository? Grepping the output of git status might do it, but grep and cron jobs are not my strong side.

    Here is some sample output from git status, standing in the folder containing the repository (e.g. /path/gitrepo/) with changed files:

        $ git status
        # On branch master
        # Changes not staged for commit:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   apache2/sites-enabled/000-default
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       apache2/conf.d/test
        no changes added to commit (use "git add" and/or "git commit -a")

    And in the same folder when there are no changes:

        $ git status
        # On branch master
        nothing to commit (working directory clean)

    Update: being synced up with origin is not important - there should simply be no local changes. Local files that must be in place go into the .gitignore file. In addition to the server configs there are also Git repos for content (static web sites, web apps, WordPress, etc.). None of the repositories should have local changes. We might use Puppet in the long run, since it's already being used for development of one of the web apps.
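
    A minimal sketch of such a check, using git status --porcelain, which gives stable, script-friendly output that is empty when the tree is clean. The repository path and mail address are placeholders:

        #!/bin/sh
        # Hedged sketch: mail an alert when a repo has uncommitted or untracked files.
        REPO=/path/gitrepo
        cd "$REPO" || exit 1
        CHANGES=$(git status --porcelain)
        if [ -n "$CHANGES" ]; then
            echo "$CHANGES" | mail -s "Uncommitted changes in $REPO" admin@example.com
        fi

    Run it hourly from cron with a line like "0 * * * * /usr/local/bin/check-git-clean", repeated (or looped) per repository.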

  • MacBook Pro (2007 - ATI-Card) Screen goes half black then shuts down the entire computer

    - by BluePerry
    Hey, when I'm starting my MBP, I get about 10-60 seconds into the booting process before the screen goes half black (it's like a black-to-transparent gradient applied from the left edge of the screen to the middle). After about 5 seconds the entire screen blacks out and the MBP shuts down.

    There have been issues with heat and some minor artifacts on the screen in the past, but nothing serious as far as I know; everything could be resolved by rebooting or letting the MBP cool down a bit. I would like to identify the problem now. There are three possibilities I can think of right now:
    - My graphics card is defective and shuts down after a few seconds (but if that's true, I find the HALF black screen kind of hard to explain).
    - My LCD panel is defective and sends back wrong signals, so the system shuts down (I'm not sure about that).
    - My fan and/or thermal sensors are defective, forcing the graphics card to shut down.

    Can anyone tell me whether I'm on the right track, or whether there is yet another reason for this? I'd be thankful for any hints or tips. Cheers, Per

  • How to fix a damaged/corrupted NTFS filesystem/partition without losing the data on it?

    - by Gareth
    I was going to install Fedora 15 alongside my Windows 7 Starter on my Acer Aspire One D255E, and at some point during the resizing of the NTFS partition (the one with Windows on it) the setup failed. Now I cannot access this partition from any OS. When I try to access it from a Fedora install running on a USB flash drive, I get this error:

        Error mounting: mount exited with exit code 12: Failed to read last sector (452534271): Invalid argument
        HINTS: Either the volume is a RAID/LDM but it wasn't setup yet,
        or it was not setup correctly (e.g. by not using mdadm --build ...),
        or a wrong device is tried to be mounted,
        or the partition table is corrupt (partition is smaller than NTFS),
        or the NTFS boot sector is corrupt (NTFS size is not valid).
        Failed to mount '/dev/sda5': Invalid argument
        The device '/dev/sda5' doesn't seem to have a valid NTFS.
        Maybe the wrong device is used? Or the whole disk instead of a partition
        (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    It doesn't make a lot of sense to me, but I'm really hoping it does to someone who can give me a way to restore the partition without losing everything on it (I have a lot of important notes from various classes on there). Cheers.
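
    The hint "partition is smaller than NTFS" suggests the partition table was shrunk while the filesystem itself was not, so the usual recovery route is to repair the partition table or boot sector rather than the data. A hedged sketch of first steps from the Fedora live system, assuming /dev/sda5 is the partition and that there is somewhere to store an image:

        # Hedged sketch: image the partition first, so every repair attempt is undoable.
        dd if=/dev/sda5 of=/mnt/backup/sda5.img bs=4M conv=noerror,sync
        # Attempt a simple NTFS fix-up (from the ntfsprogs/ntfs-3g package):
        ntfsfix /dev/sda5
        # Or interactively analyse and rewrite the partition table / boot sector:
        testdisk /dev/sda

    TestDisk's "Deeper Search" can often locate the old NTFS boundaries and restore them, after which Windows' chkdsk can finish the job.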

  • Nexenta, NFS and LOCK_EX

    - by Givre
    I'm currently using a LAMP architecture and I've hit a big problem :( I have several HTTP web servers using PHP5, all of which mount the directory for all the hosted websites via NFS (v3). The file server is running the Nexenta Storage Appliance, using ZFS. The problem is that every NFS client trying to write to a file over NFS gets stuck like this (inside the apache2 process):

        open("/nfs/website1/file.txt", O_RDWR|O_CREAT, 0600) = 11647
        fstat(11647, {st_mode=S_IFREG|0600, st_size=23754, ...}) = 0
        flock(11647, LOCK_EX

    and the process never gets the lock - it keeps waiting forever. The effect? All the apache2 processes get used up waiting, and my servers can't process any other requests because no processes are available. I don't know where to look for a solution. To me it seems to be on the NFS server side, but which configuration is wrong or missing? How can I find out what is wrong? If you need more information about the configuration, just ask me for whatever would help :)
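
    flock() over NFSv3 is serviced by the NLM/NSM side services (lockd and statd), so a hedged first check is whether they are registered at all on both ends - the hostname and export path below are placeholders:

        # Hedged sketch: verify the lock manager and status monitor are registered
        # on the Nexenta box (and, equally, on the clients).
        rpcinfo -p nexenta-host | egrep 'nlockmgr|status'

        # Possible workaround, only safe when clients need not see each other's
        # locks: mount with local-only locking so flock() is handled client-side.
        mount -t nfs -o vers=3,nolock nexenta-host:/export/websites /nfs

    If nlockmgr is missing from the rpcinfo output, the hang is explained: the flock() call blocks waiting for a lock service that never answers.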

  • How to track things that SHOULD happen, but might not have

    - by Kamiel Wanrooij
    I am running into a couple of issues with some applications we've deployed and maintain. I have the feeling we have approached this with some anti-patterns up to now, and I would like to see how to make this more flexible and stable.

    In one situation, we have a server at a client which pushes data to us to parse every night (yes, Windows Task Scheduler). This is highly unstable, however, so about once a month the push doesn't happen for reasons out of our control. This heavily impacts our business, since we then run with stale data. In another scenario we have a lot of background job processes that should be running. We already keep them up using bluepill ( http://www.github.com/arya/bluepill ), but obviously restarts happen, both automatic and manual, and people forget things or systems mess up.

    What I would like to track is events that should occur or things that should exist - the existence of a process, the execution of a program, the creation or age of a file - and be alerted when they don't happen or don't exist. We develop most things in Ruby on Rails, use New Relic, bluepill and Munin, and run on Ubuntu. I've been toying with counting ps aux | grep processname | wc -l in Munin scripts, or capturing the age of a file and raising alerts when it passes 24-26 hours, stuff like that (see the sketch below). Is there better tooling to track things that should happen, and to raise alerts when they don't?

    P.S. I know some things are suboptimal, like manually having to define bluepill for applications and then forgetting to do so. The same goes for the push-based approach of the first application: a dedicated daemon on the client side that we control, and whose connection to us we can track, might be a much better solution.
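
    For the file-age variant, a minimal sketch of the kind of check described above; the path, threshold, and mail address are placeholders:

        #!/bin/sh
        # Hedged sketch: alert when the nightly dump is missing or older than 26 hours.
        FILE=/data/incoming/nightly-dump.csv
        MAX_AGE=$((26 * 3600))
        MTIME=$(stat -c %Y "$FILE" 2>/dev/null || echo 0)
        AGE=$(( $(date +%s) - MTIME ))
        if [ "$AGE" -gt "$MAX_AGE" ]; then
            echo "$FILE is ${AGE}s old (missing or stale)" \
                | mail -s "Stale data alert" ops@example.com
        fi

    The same shape covers "process should exist" by swapping the stat for a pgrep -c check; purpose-built tools like monit or Nagios/Icinga wrap exactly this pattern with scheduling and escalation.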

  • Does leaving the laptop cord plugged in break the cord?

    - by firedrake
    I recently got a new cord for my laptop because the cord I had before broke. I'm not sure how it got messed up, but one moment it charged my laptop and the next moment it didn't, so I got a new cord. The new cord worked perfectly fine until now, and it has only been a month. The cord won't work unless I am pressing it into the socket, which I am currently doing.

    The only thing I can find in common between the cords is that I leave them plugged in 24/7. My brother says that is the problem, but I do not think it is. Any tips or hints? I'm going to use duct tape to keep the pressure on the cord until I can find a better solution. (I'm also thinking it could be the jack I plug it into on the back of my computer, but I'm focusing on the other idea for now; if I wiggle the plug in the socket of my computer, it will stop charging unless I'm pressing it in or to the side.) Any help or ideas are appreciated. Not sure if this will help, but I use a Gateway with Windows Vista. Thanks.

  • Updating files with a Perforce trigger before submit [migrated]

    - by phantom-99w
    I understand that this question has, in essence, already been asked, but that question did not get an unequivocal answer, so please bear with me.

    Background: in my company, we use Perforce submission numbers as part of our versioning. Regardless of whether this is a correct method or not, that is how things are. Currently, many developers do separate submissions for code and documentation: first the code, and then the documentation, to update the client-facing docs with what the new version numbers should be. I would like to streamline this process.

    My thought is to create a Perforce trigger (which runs on the server side) that scans the submitted documentation files (such as .txt) for a unique term (such as #####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####) and replaces it with the changelist number the submission will get. I already know how to determine this value. What I cannot figure out is how or where to update the files. I have already determined that the change-content trigger - which "fire[s] after changelist creation and file transfer, but prior to committing the submit to the database" - is the way to go, if this is possible at all. At this point the files must exist somewhere on the server. How do I determine the (temporary?) location of these files from within, say, a Python script, so that I can update them or run sed to replace the placeholder with the intended value? The online Perforce documentation I have found so far is not very explicit about whether this is possible or how the mechanics of a submission work at this stage.
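
    For what it's worth, a hedged sketch of the reading half: from a change-content trigger, the in-flight content is (to my knowledge) reached with the @=change revision specifier rather than via a temporary path on disk, so the trigger sees files through p4 print, not the filesystem. The depot path and script name below are placeholders:

        #!/bin/sh
        # Hedged sketch of a change-content trigger. Registered in the triggers
        # table (p4 triggers) with a line such as:
        #   expand-cl change-content //depot/docs/... "/scripts/expand-cl.sh %changelist%"
        CHANGE=$1
        p4 files "//depot/docs/...@=$CHANGE" | while read -r LINE; do
            FILE=${LINE%%#*}
            if p4 print -q "$FILE@=$CHANGE" | grep -q '#####PERFORCE##CHANGELIST##NUMBER'; then
                echo "placeholder present in $FILE at change $CHANGE"
            fi
        done
        exit 0

    The catch is that change-content triggers can inspect and reject content but, as far as I can tell, not rewrite it in place. The supported route for exactly this job is RCS keyword expansion: give the docs the text+k filetype and put $Change$ in them, and the server expands it to $Change: <number> $ at submit time.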

  • Win 7 Remote Desktop connection failure when already logged in.

    - by Andy E
    I have a bit of a strange problem, magnified recently by my broadband dropouts. I wasn't sure whether to post this on SU or SF, so I thought I'd start here, as more users would be likely to know what the problem is.

    In short, when I try to connect to my server (Windows Server 2008) from my laptop running Windows 7, I can only connect if my remote account was previously logged out. If I'm still logged in, I get the error message:

        Windows cannot connect to the remote server.

    No explanation or anything. If my IP address is the same as last time, I don't have this problem. If I boot up Windows XP Mode and run XP's Remote Desktop Connection, it works just fine - I think the difference is that XP's client takes me to the remote server's logon screen, whereas Win 7's RDC never shows the logon screen; it asks for credentials before entering full-screen mode.

    The real problem is that I'm having random broadband dropouts and my IP isn't static. If I log on via the XP RDC, log out, and then run the Win 7 RDC, it works fine. I realize I can just use XP's RDC for now, but I don't really like keeping XP Mode open if I can help it. Does anyone know a way around this problem - maybe forcing Win 7's RDC to go to the logon screen, or changing some server-side settings to work around the IP address issue?
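
    One hedged thing to try from the Windows 7 side: the client's /admin switch (the successor to the old /console switch) changes how the session is attached, and may behave differently with an already-logged-on account; whether it helps in this exact case is untested, and "myserver" is a placeholder:

        REM Hedged sketch: connect with the /admin switch from a command prompt.
        mstsc /v:myserver /admin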

  • Excel hyperlinks can be attached to a range of cells -- what is the use case for this?

    - by John Machin
    In Excel 2003 and 2007 (and presumably 2010), it is possible to attach a hyperlink to a single cell; this is well known. Excel also allows you to select a range for insertion, in which case clicking on any cell in the range will jump to the target of the hyperlink. I can't find any web reference to this possibility. My question is: what is the use case for being able to do this?

    My only suggestion: the first worksheet is a menu for the remainder of the workbook. Each worksheet or topic has a hyperlink on the menu sheet, and each hyperlink occupies a 3x3 range of cells to make it easier for users in a hurry to click on the correct link.

    A side question: interestingly, Excel allows you to overlap ranges. Example: link from A1:C3 to file1, then link from B2:D4 to file2. The overlapped cells (B2:C3) now point to file2; only A1, A2, A3, B1, and C1 still point to file1. No warning is given about the overlap. What is the rationale for this behaviour?

  • Linksys SFE2000 Interface

    - by boburob
    I have a real problem with a Layer 3 Linksys switch. First, every time it loses power, it seems to reset back to an older config. Not only that, but when this happens it loses the interface settings on one subnet. This would not be a problem, except that I am completely unable to get to the web interface on the working side: it allows me to log on and then just displays a blank screen. I have tried this in IE6, IE7, IE8, IE9, Firefox 3.5, Opera, and Chrome, all with the same result - except for Opera, which loads half the interface but nothing I can really use.

    I really need to get onto this switch so I can sort out routing and VLAN-tagged ports, so if anyone has any ideas on either of these issues, please let me know ASAP! Thanks! Also, due to its location and my lack of laptops with serial connections, I cannot get to it over a serial console.

    UPDATE: Looked into this a bit more, and it seems this model of switch does not save the running config to the boot config unless you explicitly save it yourself, which explains the first issue. The broken web interface is more worrying, however!

  • Are there any benchmarks showing difference between hardware virtualisation enabled/disabled?

    - by Wil
    I have a 13" sub-laptop/large-netbook with an AMD Athlon Neo X2 L335; I chose this model because it supports hardware virtualisation. In the end I hardly do any virtualisation on it, but when I do, it is fast. To my shock, I went into the BIOS and saw that virtualisation was disabled! I turned it on and I see no speed difference - or at least none that I can tell. I do not have time to do a full set of benchmarks, and I run quite a bit of software on the host, so it wouldn't be scientific anyway. I have searched quite a few places and I just cannot find any benchmarks showing the difference between having the hardware virtualisation bit enabled and disabled on the same hardware. Does anyone have any benchmarks they have seen and can share?

    In addition, I know there was an uproar a while ago when Sony disabled hardware virtualisation on some models and only offered it in their higher models as a premium feature. Apart from forcing an up-sell, are there any benefits to having it disabled, e.g. battery life or heat? I just can't find any information and can't work out why it would be disabled by default.

    Edit: The only thing I can find is that without it you cannot perform x64 virtualisation as fast. This is the only downside I can find. However, if that is the only difference, then I am still interested in the second part of the question: why offer the option to disable it?
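
    A hedged aside on methodology: before benchmarking, it is worth confirming the OS actually sees the flag flip, since some BIOSes need a full power cycle before the change takes effect. On Linux this is one line (svm is the AMD flag, vmx the Intel one):

        # Hedged sketch: a non-zero count means hardware virtualisation is exposed.
        egrep -c '(vmx|svm)' /proc/cpuinfo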

  • Can only ssh when not using wifi

    - by AChrapko
    So I have three machines: a Windows 7 desktop that is always wired to my router, an OS X laptop, and a Raspberry Pi running Debian Linux. My router is a Linksys E1000 wireless-N. My goal is to be able to ssh to the Pi from any machine while the Pi is connected via Wi-Fi. My problem is that when trying to ssh from either the Win 7 or OS X machine to the Pi, the attempt either times out or gives an error:

        ssh: connect to host 192.168.1.### port 22: No route to host

    The only times I have managed to connect to the Pi from any machine were when it was connected to the router with an Ethernet cable. Currently, with the Win 7 desktop wired and the MacBook and Pi wireless, tests give the following:
    - Win 7 ping MacBook: Destination host unreachable.
    - MacBook ping Win 7: Request timeout.
    - Win 7 ping Pi: Destination host unreachable.
    - MacBook ping Pi: Request timeout.
    - And so on.

    Plugging the MacBook into the router with an Ethernet cable, all communication between Win 7 and the MacBook works: pings, ssh, ftp, smb, etc. With no changes to the Pi, still no connections are possible to or from either of the other two machines. Note that all machines are able to connect to the internet, and all can ssh to the same machine on a completely different network, wired or over Wi-Fi.

    Plugging the Pi in with Ethernet (the MacBook still wired too), I can ssh to the Pi from both Win 7 and the MacBook, I can ssh from the Pi to the MacBook, and all machines can still reach the off-network machine.

    Also, a little side note: I was playing Warcraft 3 with my roommates the other day, and the only time they were able to see my LAN game was when they were plugged into the router with an Ethernet cable. Once or twice one of the laptops was able to connect over Wi-Fi, but not without another computer connecting first via Ethernet. So basically: does anyone have any idea why my router seems to completely ignore local wireless traffic?
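
    This pattern - wireless clients reaching the internet but not each other or the wired LAN - is exactly what a router's "AP isolation" (sometimes "wireless isolation") setting produces, so that checkbox in the E1000's wireless settings is worth ruling out first. A hedged sketch of a quick layer-2 test from the Pi, with the interface and target address as placeholders:

        # Hedged sketch: if ARP gets no reply but the router itself answers pings,
        # client-to-client traffic is being blocked at the access point.
        # (Debian package: iputils-arping)
        arping -I wlan0 192.168.1.100    # another LAN client
        ping -c 3 192.168.1.1            # the router, for comparison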

  • Internet compression proxy for low speed broadband?

    - by user23150
    I live in a rural location, using high-latency wireless off a local ISP's tower. My speed tests vary day to day, but I get around 1Mb up/down. The problem is that I work with large files, uploading and downloading (HD videos, development software, etc.), and the waiting can be painful sometimes. I also do some side contract game development, and it can be very difficult to playtest with other developers (a 200ms ping is a good day for me).

    Now, obviously the latency problem isn't going to be solved without different wireless hardware. But speed-wise, I am wondering if I can use some kind of compression technology on a proxy. For instance, my work computer has full access to a 26Mb down, 10Mb up connection that is totally unused at night and on weekends. If I could run some kind of compression technology on that server and use it as a proxy to route traffic to my home computer, I could stand to gain some major speed.

    I realize that by bogging down a system with compression, I could potentially lose whatever speed I gained. But the proxy server is a quad-core Xeon and the receiving computer is a pretty decent i7, so that shouldn't be a concern. I found http://toonel.net/ but it seems geared toward very slow narrowband users, like dial-up. I would also prefer to just point my browser at a proxy server rather than install software on my client machine.

    EDIT: I thought about my question a little more and realize I am going to need to install software on my client in order to decompress, and possibly compress (for uploading). That's not a huge deal.
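
    Given the edit concedes some client-side software, one minimal sketch in this spirit is an SSH tunnel with compression from home to the work machine, used as a SOCKS proxy. The hostname is a placeholder, and the caveat is that already-compressed data (video, zips) gains almost nothing from the -C option:

        # Hedged sketch: compressed SOCKS proxy over SSH.
        ssh -C -N -D 1080 user@work-server.example.com
        # then point the browser's SOCKS proxy setting at localhost:1080

    For the large-file transfers themselves, rsync -z over the same link adds compression plus resumability.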

  • How can I diagnose why Outlook 2007 fails with error 800CCC0F when sending an attachment, even though the message was sent?

    - by James
    As the title suggests, I've got an issue where Outlook 2007 reports that it failed to send email with error 800CCC0F (unexpectedly terminated connection), but only for messages with attachments. The email is actually sent, but Outlook keeps retrying (the message stays in the Outbox), generating more emails to the original recipient, which do get delivered.

    I've got qmail on the server side supporting half a dozen domains. It doesn't appear to matter which account I send from. I can successfully send attachments via alternate mail clients (webmail, Thunderbird) while Outlook is failing, and I can send messages without attachments, so it's seemingly not the accounts themselves or the server side - which leaves Outlook as the culprit. There doesn't appear to be any pattern to the failures, and it's not consistent (I successfully sent an attachment as recently as 3 weeks ago), so I'm at a loss as to where to look. The qmail logs don't look any different between successes and failures. Has anybody seen this before, and is there a solution?

    UPDATE: It appears it's only PDF files that this occurs with, so I'm even more stumped. I can send html/docx/txt and zip files, UNLESS the zip contains a PDF... whiskey tango foxtrot.

  • Firefox Does NOT get local site cookie

    - by Campo
    This is a weird one. We have a production server (Server 2008) and two staging servers (Server 2008 and Server 2003), and I have sites on all of them. They all use cookies.

    On the production server, browsing to our site www.supernovainteractive.com sets a cookie that records when you visited, so the logo animation (top left-hand side) does not replay when you click through to another page. This works in all browsers against the production server.

    I'm not sure what's going on, but for some reason cookies are not working on one site on the 2008 staging server only. This happens when browsing with Firefox (3.6.3); it works fine in all other browsers (IE, Chrome, Safari, Opera). In addition, the 2003 staging server works fine.

    You can test this on the Supernova Interactive site by watching the logo in the top left corner. It uses a cookie to detect whether you've already seen the animation; once you've seen it, it doesn't animate again until the next day. Currently, on the staging server, it animates every time. I have opened an outside-facing port so others can see the issue: http://exchange.supernova.com:10009 - notice that you do not get a cookie from exchange.supernova.com. Any ideas on this one? Firewalls are off on the server.
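
    A hedged first step that takes the browsers out of the equation entirely: inspect the raw response headers and compare what the working and failing servers actually send. If the Set-Cookie header differs (a stray domain or path attribute, for example, would explain one browser enforcing it more strictly than the rest), the culprit is server-side after all:

        # Hedged sketch: dump response headers from both servers and compare cookies.
        curl -sI http://exchange.supernova.com:10009/ | grep -i set-cookie
        curl -sI http://www.supernovainteractive.com/  | grep -i set-cookie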
