Search Results

Search found 8268 results on 331 pages for 'difference'.

  • Unable to connect to CopSSH when running Windows service, works when running sshd directly

    - by Joe Enos
    I've been using CopSSH (which bundles OpenSSH and Cygwin, so I don't know which of the three is the problem) as my SSH server application at home on Windows 7 Ultimate 32-bit. I have used it for about a year with no real problems, other than it sometimes taking 2 or 3 connection attempts to get through, but it has always worked within a few attempts. A few days ago, it just stopped working. The Windows service is still running, and I've rebooted, restarted the service, etc., with no change. On the client (using PuTTY on Windows), I get the message "Software caused connection abort". On the server, my event viewer registers the following:

        fatal: Write failed: Socket operation on non-socket

    I finally got it working, but only by executing sshd.exe directly from the command line on the server. No special flags or options, just straight execution, and then when I connect remotely, it goes through. I do have firewall and anti-virus software which appears to be configured properly, and the fact that things work when running sshd.exe directly also indicates that the firewall is fine. I thought the service and the executable did exactly the same thing, but apparently there's some difference. Does anyone have any ideas on where I should look for the problem? If I can't find something, I suppose I can write a Windows service or scheduled task that fires off sshd.exe directly and ensures that it stays running, but that's kind of a last resort, since it's just wrapping around something that should already work. I appreciate your help.
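
    For what it's worth, the scheduled-task fallback described above can be created from an elevated prompt; this is a minimal sketch, and the sshd.exe path is an assumption (CopSSH's install directory may differ on your machine):

        :: run sshd.exe at boot under the SYSTEM account
        schtasks /create /tn "CopSSH sshd" /tr "C:\Program Files\ICW\bin\sshd.exe" /sc onstart /ru SYSTEM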

  • Windows XP mounting USB drive to same letter as previously mapped network drive

    - by GAThrawn
    Why does Windows always mount a USB drive as the next drive letter after the last physical drive, even when that letter is already taken by a mapped network drive, and is there any way to improve this behaviour? What happens is that I tend to use a few different flash drives on my PC, as well as having both a BlackBerry and a personal phone that mount as USB drives when I plug them in to charge. Being on a corporate PC, I also have a number of mapped network drives (some set by login script, some set as persistent mappings in my profile). When I first log in I'll have drive letters like this:

        C: - Local Drive
        D: - DVD Drive
        G: - Login script mapped drive
        J: - Login script mapped drive

    When I plug the BlackBerry in, it mounts two drives (one for onboard storage, one for the SD card) as E: and F:. If I then plug in another USB drive, it mounts as G:, even though that letter is already taken by a network mapped drive. This leaves me with the following drives:

        C: - Local Drive
        D: - DVD Drive
        E: - USB drive (BlackBerry)
        F: - USB drive (BlackBerry)
        G: - Login script mapped drive
        [G: - USB drive - mounted but not visible in Explorer or command prompt]
        J: - Login script mapped drive

    I then have to go into Disk Management, find the new USB drive that's mounted to G:, and re-assign it to another letter, e.g. Z:. Once this is done, AutoPlay detects it and throws up its normal dialog, and it's browseable in Explorer. While this is OK to do if you only use one or two USB drives and have admin access to your PC with your login account, it's a total pain in the proverbial if you regularly use a whole load of different USB devices, and corporate policy means you have one account for your normal login (that only has User access to workstations), but have to use a different account for any privileged action. I realize that one possible reason for this is the difference between hardware, which is mounted and assigned drive letters at the system level, and mapped drives, which are done at the user level. For USB devices that are already plugged in before login, they're obviously mounted before Windows knows what network drives may be mapped. However, if you plug the USB devices in after you're fully logged in and have drives mapped, then surely Windows must know which letters are available?
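
    The manual re-lettering step can at least be scripted; here's a minimal sketch using diskpart (the volume number 5 is a placeholder - check the list volume output first), though it still requires elevation:

        diskpart
        rem identify the stuck USB volume in the list, then select it by number
        list volume
        select volume 5
        assign letter=Z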

  • Body of email breaks distribution list in exchange?

    - by widgisoft
    Hi, I have a very odd problem that I'm not sure is a programming issue or a server issue :-p. Basically, I'm sending an email to an Exchange distribution list that includes a PHP stack trace; during certain faults the trace includes really high-level information such as the machine's environment variables (during file reads, etc.). I went through a copy of the email line by line until the email sent, and it appears the line:

        [SUDO_COMMAND] => /etc/init.d/httpd restart

    is the culprit. Adding a string replacement before the email is sent allows a successful send. What I don't understand is WHY this stream of characters causes the issue ONLY on the distribution email. If I send the email to myself as well, i.e. "[email protected]; [email protected]", then I get the email fine. Re-ordering the list doesn't make a difference; the group never gets the email. Because the individual gets the email and not the group, I'm assuming the fault is with Exchange and some rogue filtering - I've gone through it with the sysadmins and there's no filtering of any sort on that group... so maybe it's a bug? I can't find anyone else having recorded this specific fault, so I figured I'd open it here. For now I'm just not using the distribution list, but it'd be nice to eventually find the solution. Many thanks, Chris

  • lxc bandwidth control using tc

    - by kumar
    I am trying to restrict bandwidth inside my containers. I have tried the following commands, but I don't think they are taking effect:

        cd /sys/fs/cgroup/net_cls/
        echo 0x1001 > A/net_cls.classid   # 10:1
        echo 0x1002 > B/net_cls.classid   # 10:2

        tc qdisc add dev eth0 root handle 10: htb
        tc class add dev eth0 parent 10: classid 10:1 htb rate 40mbit
        tc class add dev eth0 parent 10: classid 10:2 htb rate 30mbit
        tc filter add dev eth0 parent 10: protocol ip prio 10 handle 1: cgroup

    Here A and B are containers created with these commands:

        lxc-execute -n A -f configfile /bin/bash
        lxc-execute -n B -f configfile /bin/bash

    whereas configfile contains only this entry:

        lxc.utsname = test_lxc

    After starting the containers, I started vsftpd inside container A and tried to access files using an FTP client from another machine. Then I killed vsftpd in container A, started vsftpd in container B, and again accessed files using an FTP client from another machine. I cannot observe any difference in performance; for that matter, it is nowhere near the 40mbit/30mbit limits. Please correct me if anything is wrong here.
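
    One way to see whether the filter is actually classifying traffic is to watch the per-class counters while a transfer is running; a minimal check against the setup above:

        tc -s class show dev eth0    # byte/packet counters per HTB class
        tc -s filter show dev eth0   # confirm the cgroup filter is attached

    If the 10:1/10:2 counters stay at zero during the FTP transfer, the cgroup classifier isn't matching the traffic, and the shaping never applies.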

  • Why is Mac supposedly better than Windows for graphics?

    - by Svish
    Ok, people just keep telling me that if you're going to be working with graphics and design and stuff, you should get a Mac, and I just don't get the logic. Most of these people would be working with Adobe software, which is available for both Windows and Mac. To me it seems like their whole argument is based on "everyone else does it". Like, Mac had some graphics software that Windows didn't earlier in history, so most people were using Mac. And since most people were using Mac, new people also started using Mac. And since most people were using Mac, schools and universities used Mac, which taught new people to use Mac. So they were using Mac, and told everyone they met that everyone they knew was using Mac. And so on.

    Anyways... what is the deal, really? Is there actually any advantage to using a Mac for graphics and design and such things? My take is that you have pretty much the same software, and both Mac and Windows are powerful enough, support enough RAM, and are stable (as long as you don't install lots of junk or faulty drivers), et cetera. So, can anyone give me a good explanation of this? Is there a real difference, or are people just brainwashed?

  • How to detach a SQL Server 2008 database that is not in the database list?

    - by Amir
    I installed SQL Server 2008 on Windows 7. Then I created a database. After 2 days I reinstalled Windows and SQL Server. Now I am trying to attach my database file, but I encounter the error below. I think the files are still marked as attached, and that is why I can't attach them. What is the difference between an attached file and a non-attached file? How can I attach this file? Please help me. Error text:

        TITLE: Microsoft SQL Server Management Studio

        Attach database failed for Server 'AMIR-PC'. (Microsoft.SqlServer.Smo)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1600.1+((KJ_RTM).100402-1540+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Attach+database+Server&LinkId=20476

        ADDITIONAL INFORMATION:

        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        Unable to open the physical file "F:\Company.mdf". Operating system error 5: "5(Access is denied.)". (Microsoft SQL Server, Error: 5120)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1600&EvtSrc=MSSQLServer&EvtID=5120&LinkId=20476
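
    Operating system error 5 is a file-permission failure rather than an "already attached" state; after a Windows reinstall, the new SQL Server service account typically has no rights on the old .mdf. A sketch of one common fix, run from an elevated prompt (the service name MSSQLSERVER assumes a default instance - adjust for a named instance):

        icacls "F:\Company.mdf" /grant "NT SERVICE\MSSQLSERVER":F

    Running SSMS as administrator while attaching, or taking ownership of the file first, are common variations on the same theme.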

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our host has available a 6-drive configuration with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6, or 10? This would be the first time the provider has filled all six slots of this rackmount with SSDs, so they're a bit hesitant. I'm wondering if somebody else with this card has done something similar in a production environment.

    We do about 43GB of writes per day and currently use about 300GB of storage. The server acts as webserver, database, and image store for approximately 1 million files. The plan is to underprovision the SSDs by approximately 10% to 20% to increase their overall lifespan and performance. The fallback option is 2x480GB SSDs in RAID 1 and another 2x1TB HDDs in RAID 1. The motivation behind the all-SSD option is that the server rental cost difference between 2 SSDs and 6 SSDs is minimal (compared to the overall cost of the rental). We do not have any special high-IOPS requirements. However, if the configuration is known to work, I don't see a good reason not to use it and not have to worry about having separate 'fast and small' and 'slow and large' disks.

  • Websites down, EC2 inaccessible via SSH, CPU utilisation 100% for the last few hours - what should I do?

    - by fuzzybee
    I have multiple websites hosted on a single EC2 instance:

    - Website "abc" was down for a few hours; it sometimes threw database connection errors and sometimes just took too long to respond.
    - Website "def" was incredibly slow but still up and running.
    - The rest of the websites had the same symptoms as "abc".

    I can afford 15 minutes or less of downtime for "def". Should I then (in the AWS console) reboot my instance, or create an AMI image from my instance, launch it, and associate my Elastic IP with the new instance, or "launch more like this"?

    Background on what may have happened to my EC2: the last time I made changes was 21 hours ago. A cronjob to create snapshots ran around 19 hours ago and had been running for a long time. Google Analytics shows that traffic to my websites such as kidlander.sg has been nothing exceptional. Are there any other actions I should take, or better options I could have? (I have already contacted AWS support, but their turnaround is 12 hours, so I appreciate all the help I can get.)

    Update: I got everything back up and running, and CPU utilisation is back to normal, around 30%. There is one difference between "def", "abc", and my other websites:

    - "def"'s database is hosted on RDS.
    - "abc"'s database is hosted on an EC2 instance (different from my web server instance) configured by myself.

    Nevertheless, I checked the EC2 instance I'm using as the MySQL server yesterday and it was absolutely fine during the incident: low CPU utilisation, and I could log in using the Linux command line.
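
    Both recovery paths mentioned above can also be driven from the command line; a minimal sketch with the AWS CLI (the instance IDs, image name, and IP are placeholders):

        # take an AMI of the wedged instance first (--no-reboot avoids downtime, at some consistency risk)
        aws ec2 create-image --instance-id i-0123456789abcdef0 --name "web-backup" --no-reboot
        # then either reboot in place...
        aws ec2 reboot-instances --instance-ids i-0123456789abcdef0
        # ...or launch a replacement from the AMI and re-point the Elastic IP
        aws ec2 associate-address --instance-id i-0fedcba9876543210 --public-ip 203.0.113.10

    For a VPC instance, associate-address takes --allocation-id instead of --public-ip.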

  • How to connect 2 routers (Asmax and D-link) RJ11 vs RJ45 issue

    - by piobyz
    I just bought a new router, a D-Link DSL-2641B, and want to connect it to another one provided by my ISP, an Asmax AR 804MP. Previously I had a Linksys WRT350N, and there was no problem: I had an Ethernet cable plugged into one of the LAN ports on the Asmax and into the INTERNET (RJ45) port on the Linksys, the connection used the PPPoE protocol, and it worked OK. The D-Link has a DSL (RJ11) port (which I don't want to use as an Asmax replacement, since there is a separate Ethernet cable with a TV plugged into the Asmax that I don't want to configure from scratch on the D-Link). How should I connect my new D-Link to work with the Asmax? Via the DSL port? Via one of the LAN ports (in which case I probably should change the purpose of that port in the config, I guess)? I tried connecting the D-Link both ways:

        LAN (Asmax) to LAN (D-Link)
        LAN (Asmax) to DSL (D-Link) (using an RJ11 - RJ45 cable)

    I hope there is some setting in the D-Link's config that I overlooked. I haven't tried to see what's in the Asmax's config, but I guess I don't need to change anything there, since the Linksys worked just fine? The only difference I see is that the D-Link has an RJ11 DSL port as WAN, while the Linksys has RJ45 (called INTERNET by them) as its main WAN port.

  • Virtual (ESXi4) Win 2k8 R2 server hangs when adding role(s)

    - by Holocryptic
    I'm trying to provision a 2k8r2 Enterprise server on ESXi 4. The OS installation goes fine: VMware Tools, adding to the domain, updates, all the basic stuff before you start adding Roles and Features. I've had this happen on two attempts already, and I'm not sure where the problem might be. I don't think it's hardware, because I have another 2k8r2 Standard server that's running fine. The only real difference is the install media: the server that's working was installed using a trial ISO and license, while the one I'm having problems with is a full MAK installation.

    When I go to add a Role (the last case was Application Server), it gets all the way to "collecting installation results" before it hangs. CPU utilization in the vSphere client shows little spikes of activity with flatlines in between, but the whole console is locked up. The only way to release it is to power off and bring it back up. When you go to look at the added roles after bringing it back up, it shows that the role is installed, but I don't trust that something didn't get wedged in all of that. The first install I did was with thin disk provisioning; the second attempt was with regular disk provisioning. In both cases, 4GB of RAM and 2 vCPUs. The VMware host is an HP ProLiant DL380 G6, RAID-1 OS volume, RAID-5 data volume, 12GB RAM. Has anyone else had this problem, or know where I should start poking around?

  • Unzipping archives, preserving folder hierarchy

    - by Hydrangea
    I've got a problem and am not sure what it is, but hope someone can help me think this through, because this has me stumped.

    Backstory: I wrote a Java app (Android) that unzips some zip files downloaded from the network. Until now, this was working great. Then, this week, the archives that I'm creating on my PC (in Ubuntu 12.04) unzip on the Android phone into a flat hierarchy instead of preserving the folders. I'm creating the archives the same way (right-click on the folder, then Compress), but even though my old archives (created in 10.04) still unzip as expected, the new ones don't. On Ubuntu, the new zip files look the same to me as the old ones. When unzipped on my PC, the folders in these new archives are restored the same as the old ones... it's the Android app that extracts the old ones fine and the new ones flat. What I really want to know, though, is what the difference between the archives is.

    Question: How could one determine why one zip archive is extracted with folder hierarchy preserved, when an identical one (to all appearances on Ubuntu 12.04) is extracted with no hierarchy? Are there different ways in which a .zip file can "have" folders, between which Ubuntu doesn't distinguish?
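
    One way to pin down the difference is to compare the archive listings directly; a quick check with Info-ZIP's tools (the archive names are placeholders):

        unzip -l old.zip
        unzip -l new.zip
        zipinfo -v new.zip | less

    If the old listing shows explicit directory entries (paths ending in "/") or full relative paths on each file, while the new one shows bare filenames, that would explain the flat extraction: an unzip implementation can only recreate folders that are actually recorded in the entries' paths.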

  • I need help choosing between two configurations of the Dell Studio 14

    - by Adnan
    There are two configurations of the Dell Studio 14 (1458) which I'm looking at:

        Config 1: Core i7-720QM @ 1.6 GHz; ATI Mobility Radeon HD 5450 1GB; 4GB DDR3 RAM @ 1066 MHz; 500 GB SATA HDD @ 7200 RPM; Price: $999

        Config 2: Core i5-430M; ATI Mobility Radeon HD 4530 512MB; 4GB DDR3 RAM @ 1066 MHz; 500 GB SATA HDD @ 7200 RPM; Price: $874

    What I want to know is: would Config 1 still be able to do decent gaming (maybe some StarCraft II), and is there a great performance difference between the i5 and i7 processors? Is the $130 extra worth it for the i7 and the better graphics card? I do more than just basic computing: I plan on getting into web design (specifically using Photoshop and Dreamweaver), and I wish to do gaming. I know Config 1 is the better value, but I want to be sure that the $130 more is truly worth it. I don't have too much money and want to spend as wisely as possible, yet I am a computer geek and plan on doing a lot more than the average user.

  • High latency issue for web service call from amazon aws ec2 to local server

    - by SibzTer
    We have a legacy web application running on premises in our Houston data center. We have developed a new .NET 4-based web application to provide new features to customers. The new web application is hosted in the Amazon AWS EC2 environment (N. Virginia region, us-east-1b zone). To integrate seamlessly with the legacy application, the new web application makes web service calls to retrieve data. We are seeing unusually high latency, on the order of 5+ seconds, for these web service calls. The exact same web service call returns in less than a second on our local PCs (which makes sense given physical proximity to the actual server). The weird part is that we have developers in California who also get the same milliseconds response time.

    We are testing the web service response using third-party tools such as SoapUI and Google Chrome extensions such as Advanced REST Client, Postman REST Client, etc. As if this wasn't weird enough, we have noticed the same low latency from certain other EC2 instances while testing, which are in the same region and availability zone as well. If we experienced the high latency consistently from all the EC2 instances, I could understand, but there is something else going on. Comparing the various stats and results between the low-latency and high-latency EC2 servers does not show any significant differences: ping (constant 40ms), tracert, WinMTR, etc. We have instances in the VPC as well, so I tried both the public and private IP addresses of the web service host server, and that didn't make a difference to the above results either.

    We need to resolve this latency issue, as it is causing the resulting web pages to load very slowly (almost 15+ seconds, which is simply unacceptable). The EC2 instances run Windows Server Datacenter 64-bit. Let me know if there is any other info I can provide to help diagnose this.
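
    When comparing a fast instance against a slow one, per-phase timings can help localize where the five seconds go; a minimal sketch with curl (the URL is a placeholder for the actual web service endpoint):

        curl -o /dev/null -s -w "dns: %{time_namelookup}  connect: %{time_connect}  ttfb: %{time_starttransfer}  total: %{time_total}\n" \
            http://legacy.example.com/service.asmx

    A large gap between connect and time-to-first-byte points at the server or an intermediate proxy; a large connect time despite a constant 40 ms ping suggests something path- or MTU-related rather than the application itself.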

  • Sharepoint Workflow "Failed on Start" only when powershell import script is called from task scheduler

    - by Matt Keller
    I created a simple PowerShell script that takes an XML file in a local directory on our SharePoint server and imports it into a specific SharePoint form library (a content-management-enabled library, if that makes any difference). This script works flawlessly if I run it from the PowerShell command line manually. I call it like such: ".\script_name.ps1". It completes without error and the item is imported into the form library successfully. The workflow begins on the item and everything is happy dandy.

    However, I run into issues when I set up a scheduled task using Windows Server 2008 R2's Task Scheduler. The task runs the script without error and it does actually import the XML into the form library. It looks perfectly normal, just as if I had run the script manually. However, after about 10 or 20 minutes the workflow status for that item changes from "In progress" to "Failed on Start (Retrying)". The scheduled task in question is a basic task and has only one action (Start a program). The "Program/script" box is set to "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" and the "Add arguments" box is set to the path of the actual ps1 script (C:\scripts\sharepoint_import.ps1). I've tried running the task as various users. I've also tried with and without the "Run with highest privileges" check box. Nothing seems to work. For reference, here is the script I am using to import items into the form library.
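
    As a side note (an assumption on my part, not something from the original post): when launching a script this way, the argument string usually includes the -File switch explicitly, plus execution-policy and profile overrides, since a scheduled task doesn't inherit an interactive session's settings:

        C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Bypass -NoProfile -File C:\scripts\sharepoint_import.ps1

    That rules out policy/profile differences between the manual and scheduled runs, though it wouldn't by itself explain a workflow that starts and only fails later.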

  • APC not caching many files

    - by tetranz
    Hello, I have a Drupal site running on a VPS at Linode with PHP 5.2.10 and APC 3.1.6. It never caches more than about 25 files and barely uses any of its available memory, yet Drupal has hundreds of PHP files. I have another server where APC seems to work well and does indeed cache hundreds of files. The only difference with that site is that it runs Ubuntu 10.04 and PHP 5.3.2; the config settings are the same. What could be wrong? I'll paste the config from apc.php below. This is after hitting multiple parts of Drupal. Thanks.

        APC Version            3.1.6
        PHP Version            5.2.10-2ubuntu6.5
        APC Host               xxx.example.com
        Server Software        Apache/2.2.12 (Ubuntu)
        Shared Memory          1 Segment(s) with 32.0 MBytes (mmap memory, pthread mutex locking)
        Start Time             2010/12/02 11:32:17
        Uptime                 3 minutes
        File Upload Support    1

        File Cache Information
        Cached Files                  21 (1.4 MBytes)
        Hits                          169
        Misses                        21
        Request Rate (hits, misses)   1.00 cache requests/second
        Hit Rate                      0.89 cache requests/second
        Miss Rate                     0.11 cache requests/second
        Insert Rate                   0.17 cache requests/second
        Cache full count              0

        User Cache Information
        Cached Variables              0 (0.0 Bytes)
        Hits                          0
        Misses                        0
        Request Rate (hits, misses)   0.00 cache requests/second
        Hit Rate                      0.00 cache requests/second
        Miss Rate                     0.00 cache requests/second
        Insert Rate                   0.00 cache requests/second
        Cache full count              0

        Runtime Settings
        apc.cache_by_default          1
        apc.canonicalize              1
        apc.coredump_unmap            0
        apc.enable_cli                0
        apc.enabled                   1
        apc.file_md5                  0
        apc.file_update_protection    2
        apc.filters
        apc.gc_ttl                    3600
        apc.include_once_override     0
        apc.lazy_classes              0
        apc.lazy_functions            0
        apc.max_file_size             1M
        apc.mmap_file_mask
        apc.num_files_hint            1000
        apc.preload_path
        apc.report_autofilter         0
        apc.rfc1867                   0
        apc.rfc1867_freq              0
        apc.rfc1867_name              APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix            upload_
        apc.rfc1867_ttl               3600
        apc.shm_segments              1
        apc.shm_size                  32M
        apc.slam_defense              1
        apc.stat                      1
        apc.stat_ctime                0
        apc.ttl                       0
        apc.use_request_time          1
        apc.user_entries_hint         4096
        apc.user_ttl                  0
        apc.write_lock                1

  • Explorer.exe not starting after login on Windows Server 2003 (Terminal Services and console)

    - by Pepperoni Icecream
    When users log in to a Windows Server 2003 R2 box running Terminal Services, they get a blank desktop. Upon inspection, explorer.exe is not running. When I log in as administrator, using either RDP or the console, I have the same issue. I can pull up Task Manager and start explorer.exe manually.

    I have another Terminal Server set up exactly the same way (same apps, settings, GPO, etc.); the only difference is that we deployed the Symantec Endpoint Protection client 11.0.5 on Friday. For some reason the working Terminal Server is still on 11.0.4, but the suspect server received the 11.0.5 client upgrade. I checked the Event Viewer for any relevant explorer.exe entries, to no avail. It also seems that if SEP were preventing explorer.exe from starting at login, it would do the same for the domain admin starting explorer.exe from Task Manager. I disabled the SEP client and services on the server, issued smc -stop, and tried logging in again. Still no explorer.exe. So I'm not sure the client upgrade is relevant, but it is worth mentioning since that was the last system change. The two servers are members of an NLB group. I took the bad Terminal Server out of the group until the issue is resolved (actually stopped the host using NLB Manager). Any help is appreciated.
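
    One quick thing worth ruling out (an assumption of mine, not from the original post): the login shell comes from the Winlogon registry values, and a broken Shell value produces exactly this blank-desktop-until-manual-start behaviour:

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Shell
        :: expected data is simply: explorer.exe

    If the value has been altered (or a per-user override exists under the equivalent HKCU key), restoring it to explorer.exe should bring the desktop back at the next login.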

  • Server 2008 NAT Internet Not Working

    - by Jack
    I'm trying to set up Routing and Remote Access on Windows Server 2008 R2. I have a network connection whose internet access I want to share with another, private network. The server has two NICs, configured as follows:

        External NIC (dynamically assigned by ISP)
        IP:      10.175.4.150
        Subnet:  255.255.192.0
        Gateway: 10.175.0.1
        DNS:     10.175.0.1

        Internal NIC
        IP:      172.16.254.1
        Subnet:  255.255.255.0
        Gateway: none
        DNS:     none

    I have set the external NIC to be the public interface and enabled NAT on it in the RRAS MMC, and set the internal NIC to be a private interface. I have also set up DNS forwarding in the NAT section. From a client (IP: 172.16.254.2) I can ping the server and access files on it. When I try to browse the web with the default gateway set to the internal NIC's IP, I end up getting a 404 page which is returned from the ISP's default gateway. I'm guessing it's something to do with the double NAT, possibly. Trying to ping the ISP's default gateway from a private-network client just times out, as does accessing it directly. I've disabled and reconfigured RRAS multiple times, and that doesn't seem to have made a difference, so can anyone tell me what I'm doing wrong? Thanks.
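
    To see how far packets actually get, a simple trace from the private-network client can separate an RRAS problem from an upstream-routing one (8.8.8.8 is just an arbitrary external address):

        tracert -d 8.8.8.8

    If the first hop (172.16.254.1) answers but the trace dies at or after 10.175.0.1, the server is translating correctly and the upstream network is failing to route the NATted traffic - consistent with the double-NAT suspicion above.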

  • Ruby Passenger + nginx or lighttpd + FastCGI for shared hosting

    - by devnull
    I have set up a Passenger + nginx setup, and I plan to offer free non-commercial hosting (or in fact on-the-fly deployment) for Rack-based frameworks (e.g. Camping, Sinatra). I am facing an "issue" with Passenger. For each application, you need to configure nginx.conf (it would be the same with Apache, so it is not an nginx issue) with:

        server {
            ...
            passenger_base_uri /app1;
            passenger_base_uri /app2;
            passenger_base_uri /app3;
        }

    Now, this is not inherently bad, as in theory I could allow a user to run just one app on his webspace, but even in this case I need to create a new server block in nginx (e.g. user.domain.com). As this will mainly be used to deploy apps, the behaviour I am looking for is the possibility to auto-map several apps (e.g. app1, app2, app3, app4) under the same server (yourapp.com/app1, yourapp.com/app2) without having to update the nginx or Apache file each time. This seems to be a limitation in Passenger. As such, I am thinking about an alternative with lighttpd and FastCGI. Would this allow immediate deployment without touching the lighttpd config file, e.g. I create a new directory with app2 and it will run immediately? What is your experience of the performance difference between Passenger + nginx vs. lighttpd + FastCGI? Thanks in advance.

    Scenario details:
    - on nginx + Passenger, a user cannot add a new sub-folder and run another Sinatra/Camping app without declaring the path in nginx.conf and restarting the server;
    - wished behaviour with the new setup: a user can add a new folder with a new app and it will run on lighttpd + FastCGI without any extra configuration of the web server.

  • Internet slowed down because of SQUID Server setup

    - by Ranjith Kumar
    Recently I set up a Squid server for our office. I have a computer (A) with two Ethernet cards, one for the internet and the second one for the local network. It has Ubuntu Server OS with squid-server and dhcp3-server installed. I have added a few iptables rules to make it work like a router and redirect all HTTP traffic to port 3128. This link is my reference.

    Everything worked fine for 2 days. All of a sudden, internet speed went down drastically. When I connected the internet cable to my laptop to test the internet speed, it was fine. Again, when I reconnected it back to computer A, everything was normal. This happened 4 times in a week. Could anyone here please help me work out why the internet speed is going down, and why it becomes normal when I reconnect the cable?

    EDIT: Rebooting the system (computer A) didn't make a difference. I have changed iptables so that HTTP traffic doesn't redirect to port 3128 any further; still no change in the internet speed. I think the problem is not with Squid but with something else. Here are my iptables rules:

        SQUID_SERVER="10.1.1.1"
        INTERNET="eth1"
        LAN_IN="eth0"
        SQUID_PORT="3128"
        PROXYSERVERS=(Atlanta Baltimore Boston Chicago Dallas Denver Houston KansasCity LosAngeles Miami NewYork Philadelphia Phoenix SanAntonio SanDiego SanJose Seattle Washington)
        SERVERLEN=${#PROXYSERVERS[*]}
        I=0

        iptables -F
        iptables -X
        iptables -t nat -F
        iptables -t nat -X
        iptables -t mangle -F
        iptables -t mangle -X

        modprobe ip_conntrack
        modprobe ip_conntrack_ftp

        echo 1 > /proc/sys/net/ipv4/ip_forward

        iptables -P INPUT DROP
        iptables -P OUTPUT ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT
        iptables -A INPUT -i $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables --table nat --append POSTROUTING --out-interface $INTERNET -j MASQUERADE
        iptables --append FORWARD --in-interface $LAN_IN -j ACCEPT
        iptables -A INPUT -i $LAN_IN -j ACCEPT
        iptables -A OUTPUT -o $LAN_IN -j ACCEPT

        while [ $I -lt $SERVERLEN ]; do
            iptables -t nat -A PREROUTING -i $LAN_IN -p tcp -d ${PROXYSERVERS[$I]}.wonderproxy.com --dport 80 -j ACCEPT
            let I++
        done

        iptables -t nat -A PREROUTING -i $LAN_IN -p tcp --dport 80 -j DNAT --to $SQUID_SERVER:$SQUID_PORT
        iptables -A INPUT --protocol tcp --dport 80 -j ACCEPT
        iptables -A INPUT --protocol tcp --dport 443 -j ACCEPT
        iptables -A INPUT --protocol tcp --dport 22 -j ACCEPT
        iptables -A INPUT -j LOG
        iptables -A INPUT -j DROP
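
    Since the slowdown survives removing the port-80 redirect, it may be worth watching the connection-tracking table and the NICs while the speed is degraded; a hedged sketch (the /proc paths match the older ip_conntrack module this script loads - newer kernels use nf_conntrack paths instead):

        cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count
        cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
        ifconfig eth0; ifconfig eth1    # look for growing errors/dropped counters

    A conntrack table at or near its maximum, or steadily climbing error/drop counters on one interface, would each explain a machine that slows down over days and recovers when the cable (and hence the link) is reset.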

  • Easy GUI way to auto scale EC2 and RDS: aws console, scalr, ylastic...?

    - by Zillo
    I am managing all my instances with the AWS Management Console (the GUI web console), but now I want to use Auto Scaling, and it seems that this cannot be done with that console. Yes, there is CloudWatch, but I can only create alarms (e-mail notifications); it seems that CloudWatch needs you to add the auto-scaling policy somewhere else (via the command line?). I would like to use some easy GUI interface. Ylastic and Scalr seem to be good options. Which one do you think is better? Regarding Scalr, is there any difference between the open source software Scalr and the service Scalr.net? I mean, is the GUI interface the same? I like the idea of Scalr because I do not need to give my Secret Access Key to a third party (as with Ylastic or Scalr.net). One question about the Scalr software: does it have to be installed on the instances, or must it be installed on another machine? Do I need to set up all my security permissions, AMIs, snapshots, etc. again, or can I use the AWS Management Console for everything and Scalr just for auto scaling?
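
    For reference, the policy the console is missing can indeed be created from the command line; a minimal sketch using the current AWS CLI (names, AMI ID, instance type, and zone are placeholders, and this covers EC2 only, not RDS):

        aws autoscaling create-launch-configuration --launch-configuration-name lc-web \
            --image-id ami-12345678 --instance-type t2.micro
        aws autoscaling create-auto-scaling-group --auto-scaling-group-name asg-web \
            --launch-configuration-name lc-web --min-size 1 --max-size 4 \
            --availability-zones us-east-1b
        aws autoscaling put-scaling-policy --auto-scaling-group-name asg-web \
            --policy-name scale-out --scaling-adjustment 1 --adjustment-type ChangeInCapacity

    A CloudWatch alarm can then trigger the returned policy ARN instead of (or in addition to) sending e-mail.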

  • Powershell Copy-Item fails silently

    - by R W
    I have a PowerShell 2.0 script running on Windows Server 2008 R2 64-bit that copies some Hyper-V .vhd files to another server as a 'backup solution'. The script gets a list of the .vhds to copy, then iterates over that list to copy them using Copy-Item; it also writes some logging info to a file. The files are copied to another server (Windows Server 2003 SP2) into a directory compressed with NTFS compression.

    One of the files isn't copied. It's relatively big, ~68GB; the others are 20GB or less. The weird thing is that during the copy process the file appears on the destination server, and the log file generated seems to indicate the file is copied, judging by the difference in the times of the log file entries. I see no error messages in the log file and nothing in the event log of either machine. Here's the code that does the copy:

        Get-ChildItem $VMSource *.vhd -Recurse | foreach-object {
            $time = Get-Date -format HH.mm.ss
            Add-Content $logFileName "$time : File Copy ($_) started"
            $fullname = $_.FullName
            Add-Content $logFileName "$time : Copying $fullname to $VMDestination"
            Copy-Item $fullname $VMDestination -Force -ErrorAction SilentlyContinue -ErrorVariable errors
            foreach ($error in $errors) {
                if ($error.Exception -ne $null) {
                    Add-Content $logFileName "`tERROR COPYING FILE : $($error.Exception)"
                }
            }
            $time = Get-Date -format HH.mm.ss
            Add-Content $logFileName "$time : File Copy ($_) finished"
        }

    I can only think there's some problem with copying a file that big to a compressed directory, maybe? Any ideas?
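
    Not from the original post, but if the goal is simply a dependable large-file copy, robocopy's restartable mode is a common swap-in for Copy-Item here (the paths are placeholders):

        robocopy D:\VHDs \\backupserver\vhds *.vhd /Z /R:2 /W:5 /NP /LOG+:C:\logs\vhdcopy.log

    Unlike Copy-Item with -ErrorAction SilentlyContinue, robocopy's exit code and log state plainly whether the 68GB file actually made it across.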

  • How can I find the USB wireless adapter into the dmesg log file?

    - by AndreaNobili
    I am pretty new to Linux (Raspbian on a Raspberry Pi, but I think there is no difference here) and I have to install a USB wireless network adapter (the product is the TP-Link TL-WN725N, this one: http://www.tp-link.it/products/details/?model=TL-WN725N ). Now, I think this is not automatically recognized by my system, because if I execute the ifconfig command I obtain the following output:

        pi@raspberrypi ~ $ ifconfig
        eth0      Link encap:Ethernet  HWaddr b8:27:eb:2a:9f:b0
                  inet addr:192.168.1.8  Bcast:192.168.1.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:475 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:424 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:34195 (33.3 KiB)  TX bytes:89578 (87.4 KiB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    So it sees only my Ethernet network interface and not the wireless one. I was thinking of looking in dmesg, but I don't know what to search for or how to filter the dmesg output. For example, with the following command I can see the dmesg log lines relating to my Ethernet port:

        pi@raspberrypi ~ $ cat /var/log/dmesg | grep -i eth
        [    3.177620] smsc95xx 1-1.1:1.0 eth0: register 'smsc95xx' at usb-bcm2708_usb-1.1, smsc95xx USB 2.0 Ethernet, b8:27:eb:2a:9f:b0
        [   18.030389] smsc95xx 1-1.1:1.0 eth0: hardware isn't capable of remote wakeup
        [   19.642167] smsc95xx 1-1.1:1.0 eth0: link up, 100Mbps, full-duplex, lpa 0x45E1

    But what can I search for to find the USB wireless adapter? Thanks.
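
    A reasonable starting point (not from the original post) is to confirm the adapter is even seen on the USB bus, then grep the kernel log around its chipset; the TL-WN725N is commonly a Realtek RTL8188-family device, though that varies by hardware revision:

        lsusb                                 # the adapter should appear here even with no driver
        dmesg | grep -iE 'usb|wlan|rtl|8188'
        ip link                               # a wlan0 entry means a driver did bind

    If lsusb lists the device but no wlan0 interface appears, the missing piece is the driver/firmware rather than the adapter itself.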

  • Getting Windows (VMware) to load from OSX's localhost without an Internet Connection

    - by Jonah Goldstein
    I'm using MAMP to host my local sites, and VirtualHostX so that I can access sites during local development via a convenient URL like mysite.dev. I'm also running Windows XP via VirtualBox, and it would be great to be able to load up any of my local sites within Windows while offline, as unfortunately I'm currently often working on the move without access. I know that I can append my IP and a nice domain name to the hosts file in C:/WINDOWS/system32/drivers/etc ... and I can find my IP simply through Terminal with "ifconfig" while I'm online. The problem is that when I'm not online, there's no IP. Even if there is an IP (when I have a connection), I still have to grab it and update the Windows hosts file all the time, since I'm developing from a laptop and have a new IP at the drop of a dime.

    I found a tutorial where the author is able to get a permanent IP. He uses VMware Fusion as his virtual machine, which is the only difference between his setup and mine. By running the terminal command "ifconfig vmnet1" he gets a secret IP the virtual machine uses to talk to OS X, and that doesn't change - which is awesome. I'm assuming it exists even if he's offline. His tutorial is here: http://bit.ly/U2lq

    It would be pretty fantabulous if I could replicate this with VirtualBox. Anyone have ideas? Thanks :)
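
    The VirtualBox analogue of vmnet1 is a host-only network (an assumption based on VirtualBox's networking model, not something from the tutorial): attach a host-only adapter to the XP VM, and the Mac gets a stable virtual interface regardless of connectivity:

        # on the OS X host, after enabling a host-only network in VirtualBox's preferences
        ifconfig vboxnet0     # typically 192.168.56.1 by default

    Inside the XP guest, pointing C:\WINDOWS\system32\drivers\etc\hosts at that address (e.g. "192.168.56.1  mysite.dev") should then work offline, since the host-only network never touches the physical NIC.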
