Search Results

Search found 117786 results on 4712 pages for 'one two three'.


  • DNS failover in a two datacenter scenario

    - by wanson
    I'm trying to implement a low-cost solution for website high availability, and I'm looking for the downsides of the following scenario. I have two servers with the same configuration, content, and MySQL replication (dual-master). They are in different datacenters - let's call them serverA and serverB. Users use serverA; serverB is more of a backup. Now I want to use DNS failover to switch users from serverA to serverB when serverA goes down. My idea is that I set up DNS servers (BIND/PowerDNS) on serverA and serverB - let's call them ns1.website.com and ns2.website.com (assuming I own website.com) - and then configure my domain to use them as its nameservers. Both DNS servers will return serverA's IP as my website's IP. If serverA goes down, I can (either manually or automatically from serverB) change the configuration of serverB's DNS to return serverB's IP as the website's IP. Of course the TTL will be low, as it's supposed to be in DNS failover setups. I know that it may take some time to switch to serverB (DNS TTL, time to detect serverA's failure, serverB DNS reconfiguration, etc.), and that some small portion of users won't reach serverB anyway, and I'm OK with that. But what are the other downsides of such an approach?

    An alternative scenario is that ns1.website.com returns serverA's IP as the website's IP, and ns2.website.com returns serverB's IP. But AFAIK clients don't always use the primary nameserver and sometimes use the secondary one, so some small portion of users would use serverB instead of serverA, which is not quite what I'd like. Can you confirm that DNS clients behave like that, and can you tell what percentage of clients would be likely to use serverB instead of serverA? This option also has the downside that when serverA comes back up, it will automatically be used as the website's primary server again, which is also a bad situation (cold cache, MySQL replication could have failed in the meantime, etc.), so I'm adding it only as a theoretical alternative. I was thinking about using one of the professional DNS failover companies, but they charge by the number of DNS requests and the fees are very high (why?)
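
    As a rough illustration of the first scenario, below is a minimal sketch of the zone data both nameservers could serve. The record names, IPs, and timer values are placeholders; the only important part is the short TTL on the record that gets rewritten during failover.

      ; sketch of website.com zone data (illustrative values only)
      $TTL 60
      @    IN  SOA  ns1.website.com. hostmaster.website.com. ( 2014010101 3600 600 604800 60 )
           IN  NS   ns1.website.com.
           IN  NS   ns2.website.com.
      @    IN  A    203.0.113.10   ; serverA's IP; on failover, serverB's copy is changed to serverB's IP
      www  IN  A    203.0.113.10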

    Read the article

  • One server with multiple desktops "heads" with VNC

    - by Alexis K
    I am managing a system of kiosks. Each kiosk currently runs a web browser with the kiosk's application running in the browser, and each kiosk needs to be able to display separate content. At times, the application running in the web browser freezes, so I have to go out to the site to refresh the page. I want to see if there is a way to have one central server that has multiple browser "heads", with each kiosk running a program like VNC to display one of those heads. That way, when the program freezes, I just have to log in to the central server and refresh the page. Getting VNC or other remote desktop software installed on the clients is no problem. What I am looking for is a way to have VNC connect to a specific head running a web browser. Does such a thing exist? Or do I have to run a VM for each kiosk to remote into? Any advice, pointers, or solutions would be helpful.
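
    One way this is commonly approached is one virtual X display per kiosk, each running its own VNC server and browser. The sketch below assumes TigerVNC's vncserver and Chromium; the display numbers and URLs are only placeholders.

      # one Xvnc display per kiosk, each with its own browser in kiosk mode
      vncserver :1 -geometry 1920x1080 -depth 24
      DISPLAY=:1 chromium-browser --kiosk http://example.com/kiosk1 &
      vncserver :2 -geometry 1920x1080 -depth 24
      DISPLAY=:2 chromium-browser --kiosk http://example.com/kiosk2 &

    Each kiosk then points its VNC viewer at its own display (port 5901, 5902, ...), so refreshing a frozen page only requires connecting to that one display on the central server.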

    Read the article

  • Have it fixed or buy a new one?

    - by Workshop Alex
    My dual-monitor system has just become a single-monitor system again: the older monitor decided it would be nice to just turn black. It's a Samsung LCD monitor and is over three years old. I'm not sure if the warranty is still valid, but I wonder which option would be more efficient: 1) have the monitor fixed for a small amount, or 2) buy a new monitor for a slightly bigger amount. When monitors were still expensive I wouldn't have hesitated and would just have had my monitor repaired. But prices are so low nowadays (and repairs are expensive) that I wonder if it's worth the trouble... Of course, I'm in no hurry, since I still have another monitor. It's just that I liked the dual-monitor setup. Solved! I just ordered a new monitor, a Samsung SyncMaster T260HD 25.5". It costs much more than having my old one repaired would, but I noticed that this one has a built-in TV tuner plus speakers. It's way more expensive than a repair, but it's worth the additional value it provides.

    Read the article

  • Postfix relay all mail through SES except for one sending domain / address

    - by Kevin
    I'm thinking this is really, really simple, but I can't figure out what I need to do. I don't mess with Postfix much (I just let it run and do its thing), so I've got no idea where to even start with this. We have Postfix currently configured to relay all mail out through SES using the configuration below. We need to modify this so that emails sent from one of our domains (domain.com) do NOT go through SES; everything else should continue to flow out through the SES connection. I'm assuming this is a one-line thing, but my Google skills are not helping me at all.

      relayhost = email-smtp.us-east-1.amazonaws.com:25
      smtp_sasl_auth_enable = yes
      smtp_sasl_security_options = noanonymous
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_use_tls = yes
      smtp_tls_security_level = encrypt
      smtp_tls_note_starttls_offer = yes
      smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
      smtp_destination_concurrency_limit = 450

    Update: I have created a sender_transport file in /etc/postfix. In it is:

      @domain.com smtp:

    I then ran this through postmap, placed

      sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport

    above the block of settings shown earlier, and restarted Postfix, but still all email is going out through SES. Log after sending:

      Oct 22 14:38:48 web postfix/smtp[19446]: 4B19D640002: to=<[email protected]>, relay=email-smtp.us-east-1.amazonaws.com[54.243.47.187]:25, delay=1.4, delays=0.01/0/0.92/0.44, dsn=2.0.0, status=sent (250 Ok 00000141e21b181f-ee6f7c4f-f0f5-4b0f-ba69-2db146a4f988-000000)
      Oct 22 14:38:48 web postfix/qmgr[19435]: 4B19D640002: removed

    I don't think this log is what you're looking for, but it's the only thing that is logged when mail goes out, and this is with me running /usr/sbin/postfix -v start manually and not with the init script.
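
    For what it's worth, when a sender-dependent transport doesn't seem to take effect, one quick check is to query the compiled map directly and confirm the envelope sender actually matches a key, and that the running config picked the parameter up; the addresses below are placeholders.

      # does the map resolve for the exact envelope sender being used?
      postmap -q "someuser@domain.com" hash:/etc/postfix/sender_transport
      postmap -q "@domain.com" hash:/etc/postfix/sender_transport

      # did the running Postfix actually load the new setting?
      postconf sender_dependent_default_transport_maps

    If postmap returns nothing for the full sender address, the key format or the domain in the map is the likely culprit; if postconf shows the parameter empty, main.cf was edited but never reloaded.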

    Read the article

  • Two different subwoofers aren't working on my machine or my phone

    - by Philluminati
    I have speakers that came with my computer: two small desktop speakers and a subwoofer with a bass volume control on the back. It's worked for years. I was listening to Spotify on my speakers as loud as they would possibly go, with the bass turned up to max, and suddenly the subwoofer stopped working. I've plugged the speakers into my Android HTC Desire Z handset and again the desktop speakers play music but the subwoofer doesn't (even after fiddling with the volume control), so I figured I'd broken it. I went to Amazon and bought a replacement, this one: http://www.amazon.co.uk/dp/B002N46YD8/ref=pe_217191_31005151_dp_1. But it doesn't work either, on my desktop or on my Android phone. I had a play with alsamixer and the LFE and Center controls are switched on and the speakers are OK... but still no bass. Am I unlucky enough to have bought a new subwoofer that is already broken out of the box, or is there something else wrong that I could look into? Are there any other tests I could perform to see if the problem is on my end or not?
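
    On the Linux side, a simple per-channel test can show whether a low-frequency channel is being produced at all; this is only a sketch using ALSA's speaker-test and assumes the output is multi-channel (use -c 2 on a plain stereo jack).

      # cycles through each channel by name, including LFE on a 5.1 device
      speaker-test -c 6 -t wav

    If two different subwoofers stay silent on both the PC and the phone, the elements they have in common (the cable between the satellites and the sub, or the way the sub is being fed) are worth checking before blaming the hardware itself.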

    Read the article

  • Debian, 2 NICs load-balancing or aggregating with the same gateway

    - by pouney
    Hi, I have one server with two NICs connected to one switch, using the same gateway. Behind the switch we have the internet:

      |Debian| - eth0 - switch - internet
               - eth1 -   same

    I don't understand how to load-balance between eth0 and eth1; the inbound/outbound traffic always uses eth1. This is the config:

      # The primary network interface
      allow-hotplug eth0
      auto eth0
      iface eth0 inet static
          address 192.168.248.82
          netmask 255.255.255.240
          network 192.168.248.80
          broadcast 192.168.248.95
          gateway 192.168.248.81

      allow-hotplug eth1
      auto eth1
      iface eth1 inet static
          address 192.168.248.83
          netmask 255.255.255.240
          network 192.168.248.80
          broadcast 192.168.248.95
          gateway 192.168.248.81

    Kernel IP routing table:

      Destination      Gateway          Genmask          Flags Metric Ref  Use Iface
      192.168.248.80   0.0.0.0          255.255.255.240  U     0      0      0 eth1
      192.168.248.80   0.0.0.0          255.255.255.240  U     0      0      0 eth0
      0.0.0.0          192.168.248.81   0.0.0.0          UG    0      0      0 eth1
      0.0.0.0          192.168.248.81   0.0.0.0          UG    0      0      0 eth0

    The IPs aren't real; they're just for the example. Does anybody have an idea of the correct routing to make 192.168.248.82 use eth0 and 192.168.248.83 use eth1? I have many examples for multiple gateways, but here the gateway is the same. Thanks all. Regards
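
    For reference, the usual way to make replies from each address leave through its own NIC is source-based policy routing; a minimal sketch (table names and numbers are arbitrary) looks like this. True aggregation of both links would instead need Linux bonding plus a switch that supports the chosen mode.

      # /etc/iproute2/rt_tables: add two custom tables, e.g. "100 uplink0" and "101 uplink1"
      ip route add default via 192.168.248.81 dev eth0 table uplink0
      ip route add default via 192.168.248.81 dev eth1 table uplink1
      ip rule add from 192.168.248.82 table uplink0
      ip rule add from 192.168.248.83 table uplink1

    With these rules, traffic sourced from .82 always uses eth0 and traffic from .83 always uses eth1, regardless of which default route the main table happens to list first.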

    Read the article

  • Creating multiple SFTP users for one account

    - by Tom Marthenal
    I'm in the process of migrating an aging shared-hosting system to more modern technologies. Right now, plain old insecure FTP is the only way for customers to access their files. I plan on replacing this with SFTP, but I need a way to create multiple SFTP users that correspond to one UNIX account. A customer has one account on the machine (e.g. customer) with a home directory like /home/customer/. Our clients are used to being able to create an arbitrary number of FTP accounts for their domains (to give out to different people), and we need the same capability with SFTP. My first thought was to use SSH keys and just add each new "user" to authorized_keys, but this is confusing for our customers, many of whom are not technically inclined and would prefer to stick with passwords. SSH is not an issue; only SFTP is available. How can we create multiple SFTP accounts (customer, customer_developer1, customer_developer2, etc.) that all function as equivalents and don't interfere with file permissions (ideally, all files should retain customer as their owner)? My initial thought was some kind of PAM module, but I don't have a clear idea of how to accomplish this within our constraints. We are open to using an alternative SSH daemon if OpenSSH isn't suitable for our situation; again, it needs to support only SFTP and not SSH. Currently our SSH configuration has this appended to it in order to jail the users in their own directories:

      # all customers have group 'customer'
      Match group customer
          ChrootDirectory /home/%u        # jail in home directories
          AllowTcpForwarding no
          X11Forwarding no
          ForceCommand internal-sftp      # force SFTP
          PasswordAuthentication yes      # for non-customer accounts we use keys instead

    Our servers are running Ubuntu 12.04 LTS.
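
    One low-tech approach that keeps every file owned by customer is extra login names that share the customer's UID and GID. This is only a sketch (the -o flag permits the duplicate UID, and the names are illustrative), and it carries the usual caveats of shared UIDs, such as shared quotas and indistinguishable ownership in logs.

      # each "sub-user" gets its own password but writes files as the customer account
      useradd -o -u "$(id -u customer)" -g "$(id -g customer)" \
              -d /home/customer -s /usr/sbin/nologin customer_developer1
      passwd customer_developer1

    Because the Match block above keys on the customer group, these extra names fall under the same jail and ForceCommand restriction; note that ChrootDirectory /home/%u expands to the login name, so with shared home directories it may be better written as ChrootDirectory %h.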

    Read the article

  • Synchronize two directories on linux pc

    - by Gab
    I need a distributed filesystem (or a synchronization tool) that is capable of keeping a directory synchronized across 4 PCs. My requirements are:

      - offline access (data must be available offline on each PC)
      - preserve execution rights: some files are marked executable on a Linux partition, and this flag should be replicated
      - efficient sync strategy: some of my files are 20 GB and are changed quite often, but only in very small parts (VirtualBox images); delta transmission is welcome
      - efficient handling of space: no history for files, and files shouldn't be copied to temp directories "just in case you break it"
      - it must propagate deletions of files
      - modification can happen on any of the 4 PCs, and changes should be propagated when the other PCs are connected

    Other specs of my setup: sync is over a LAN, and the total amount of data to be synced is around 180 GB in some ten thousand files. Changes are small, but can happen in big files. At the moment I'm interested in a Linux-only solution, and conflicts either don't happen or are solved with "last one wins". I haven't found any good solution. I've been trying:

      - unison: the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, with the disk light steady on
      - SparkleShare: doesn't handle large files nicely, and it keeps a history of all your changes that grows indefinitely; they promise it will be fixed in future releases, but at the moment it still doesn't fit my needs
      - ownCloud: keeps a history of each file I change
      - Coda? (help! I couldn't set it up correctly!)
      - git-annex assistant: transforms all your files into symlinks and marks the original file as read-only ("just in case you make a mistake while you modify it"!); before you edit a file you have to issue a special command, "git-annex unlock", that creates a local copy of the file, and you have to remember to lock it again if you want it synchronized

    What should I try next?
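
    Since unison is the tool that currently works, a couple of profile options may take the edge off the hashing stalls; this is only a sketch of a profile (the roots are placeholders), and fastcheck in particular trades a full content hash for an mtime-and-size check.

      # ~/.unison/sync.prf (illustrative)
      root = /data/shared
      root = ssh://otherpc//data/shared
      fastcheck = true     # skip re-hashing files whose size and mtime are unchanged
      prefer = newer       # "last one wins" conflict handling
      perms = -1           # replicate all permission bits, including the executable flag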

    Read the article

  • Find slow network nodes between two data centers

    - by 2called-chaos
    I've got a problem syncing a large amount of data between two data centers. Both machines have a gigabit connection and are not fully occupied, but the fastest I am able to get is something between 6 and 10 Mbit - not acceptable! Yesterday I ran some traceroutes, which indicate huge load on a LEVEL3 router, but the problem has existed for weeks now and the high response time is gone (20 ms instead of 300 ms). How can I trace this to find the actual slow node? I thought about a traceroute with bigger packets, but will this work? In addition, this problem might not be related to one of our servers, as there are much higher transmission rates to other servers and clients; actually, office-to-server is faster than server-to-server! Any idea is appreciated ;) Update: We actually use rsync over ssh to copy the files. As encryption tends to add bottlenecks, I tried an HTTP request, but unfortunately it is just as slow. We have an SLA with one of the data centers. They said they already tried to change the routing, because they say it is related to a cheap network that the traffic gets routed through. It is true that it routes through a "cheapnet", but only the other way around: our direction goes through LEVEL3 and the other way goes through Lambdanet (which they said is not a good network). If I got it right (I'm a network intermediate), they simulated a longer path to force routing through LEVEL3, and they announce LEVEL3 in the AS path. I basically want to know if they're right or if they're just trying to shift their responsibility. The thing is that the problem exists in both directions (while taking different routes), so I think it is the responsibility of our hoster. And honestly, I don't believe that there is a DC-to-DC connection which can only handle 600 kB/s - 1.5 MB/s for weeks! The question is how to detect WHERE this bottleneck is.
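
    As a starting point, the two measurements below separate path problems from protocol problems; this is just a sketch, and the hostname is a placeholder.

      # per-hop latency/loss over time, with larger probe packets than the traceroute default
      mtr --report --report-cycles 100 --psize 1400 dc2.example.org

      # raw TCP throughput without rsync or ssh in the way
      iperf3 -s                                   # on the receiving data center
      iperf3 -c dc2.example.org -P 4 -t 30        # on the other; four parallel streams for 30 s

    If several parallel iperf3 streams together fill the link while a single stream cannot, the path is more likely suffering from packet loss or a small TCP window than from a saturated router; if even the aggregate stays at a few Mbit, the per-hop mtr report usually shows where loss starts.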

    Read the article

  • What is the alternative of Apache's global Alias in IIS instead of adding a Virtual Directory to every single sites one by one?

    - by Sk8erPeter
    In Apache, there's a way I can make phpMyAdmin available globally to all the VirtualHosts I set up. It looks like this:

      <IfModule mod_alias.c>
          Alias /phpmyadmin "c:/AppServ/www/phpMyAdmin"
      </IfModule>

    This way I reach phpMyAdmin by appending /phpmyadmin to any of my domain names, and I can see phpMyAdmin's initial page. (So for example it works for all my domains like this: http://example_1.com/phpmyadmin, http://example_2.com/phpmyadmin, and http://example_3.com/phpmyadmin also work.) In IIS, there's an "Add Virtual Directory..." option when right-clicking on a given site. Here I can set up, e.g., phpMyAdmin's path to be reached by appending /phpmyadmin to the given domain (e.g. http://example_1.com/phpmyadmin), but isn't there a "global" setting similar to Apache's Alias? Or do I have to add a virtual directory to every site one by one? I'm just curious; it's not hard work to do it, but I'm interested in whether another method exists. Thanks in advance!
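
    IIS has no single directive with the same effect, but the per-site step can at least be scripted; a minimal sketch with the WebAdministration PowerShell module is below (the path is taken from the Apache example above, and the loop and names are illustrative).

      Import-Module WebAdministration
      Get-Website | ForEach-Object {
          New-WebVirtualDirectory -Site $_.Name -Name "phpmyadmin" `
              -PhysicalPath "C:\AppServ\www\phpMyAdmin"
      }

    Re-running the loop after adding a new site keeps the behaviour close to Apache's global Alias.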

    Read the article

  • UW-IMAP server, high load for one user

    - by Bruce Garlock
    We have been experiencing a very strange anomaly with one specific user on our UW-IMAP server. We have about 75 users on the server, and one particular user, who is about in the middle in terms of used storage, keeps having issues with slow speed. Most of our users use Thunderbird 2 or Thunderbird 3 - mostly 2, because of the performance issues we have had with 3. This user was on 3, and I downgraded him to 2. The performance has gotten better, but according to the imapd processes on the server, his username is using the most CPU % and CPU time. I've already done all the usual troubleshooting: started the profile from scratch, compacted folders, re-indexed, moved him to a newer, faster computer, etc. Still, this user's imapd process is always using the most CPU on the server. For troubleshooting, we set up another user who has more usage, folders, etc. than he does, but we don't see that user's imapd process taking up most of the CPU. So it almost sounds like a particular email may be the culprit, but how can we find it, if that's the problem? This has been going on for a while, and he is a management person, so his patience is about to end. Does anyone have any ideas?

    Read the article

  • Check for unique rows, but ignore one particular column

    - by user269148
    I have an XML document that looks like this: columns A to S with headers, and 1,922 rows. This is a backup of some SMS messages, and I want to get rid of duplicates. The problem is that the time in the readable_date header has been messed up. There is nothing wrong with the date, but the clock time is wrong, so I have split that column into three: year, day, and clock. I know I can use a standard filter, but it only looks for unique values in a single column. What I want to perform is a row check similar to this: check whether row 2 (columns A onward) is equal to row 3 (columns A onward), but ignore column R. If true, delete row 3; otherwise check whether row 2 equals row 4, and so on. I need to ignore that one particular column in every comparison, and I need to do this for the complete sheet, repeating the check for every row once the first one is done checking for duplicates... If anyone has a better solution, please say so. Anyway, can anyone help?
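
    One way to express that in the spreadsheet itself is a helper key that skips column R, plus a duplicate flag. This is only a sketch: it assumes headers in row 1, data from row 2, and that TEXTJOIN is available (in older Excel versions the key has to be built with & instead); the helper columns can be filtered and deleted afterwards.

      ' in T2, a key over every column except R:
      =TEXTJOIN("|", TRUE, A2:Q2, S2)
      ' in U2, flag rows whose key has already appeared above:
      =IF(COUNTIF($T$2:T2, T2) > 1, "duplicate", "")

    Filling both formulas down, filtering column U for "duplicate", and deleting the visible rows removes every repeat while ignoring column R.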

    Read the article

  • logrotate deletes all maillogs older than one day

    - by shadyabhi
    I see only two files, maillog and maillog.1, in /var/log. Grepping for maillog in the logrotate.d directory gives three files that mention maillog.

    syslog:

      /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron {
      #/var/log/messages /var/log/secure /var/log/spooler /var/log/boot.log /var/log/cron {
          daily
          sharedscripts
          postrotate
              /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
              /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
          endscript
      }

    syslog-ng:

      /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron /var/log/kern.log /var/log/kern {
          sharedscripts
          postrotate
              /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
              /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
          endscript
      }

    and maillog:

      /var/log/maillog {
          daily
          compress
          # rotate 365
          rotate 14
          sharedscripts
          postrotate
              /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
              /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
          endscript
      }

    I am new to logrotate, so maybe I am missing something obvious. What can the issue be? The setup was already done when I started managing the server, so I also don't know why there are three mentions of maillog in the logrotate configuration.
    Read the article

  • Two monitors with Mac Mini - one displays black despite receiving a signal

    - by alex
    My Mac Mini outputs to my two new monitors, Dell U2311Hs. The LED on the bezel shows blue when the monitor is receiving a signal, or yellow otherwise. Both screens show blue, and it also seems my Mini can see both of them... However, one of them is black. It just displays black, but appears to be receiving a signal (when I turn the Mac off, it then displays "No Signal"). To make things weirder, on startup the boot screen (white with the Apple logo) appears on the right monitor (the one that now displays black). Occasionally, it flickers up on the black screen for a second. I have tried Detect Displays; it appears to do nothing. I'm also running a dual-monitor KVM, and the video connections are DVI-D. How can I fix this situation? Thanks. Update: This is the weirdest thing - I used the DVI-D cable that came with the KVM and it seems to have fixed it. I hadn't bothered before, because it looks identical to any other DVI cable (in form and pin-out). So, I will accept an answer if someone can tell me what the difference between these cables may be.

    Read the article

  • Combine multiple rows into one

    - by Jim
    I am trying to combine multiple rows of data into one. Column A contains the value on which the groupings will be based - rows whose Column A values match will be combined into one row. My range extends from column A through X, so I need the matching row's data to start in column Y. Example:

      +--------------+
      ¦ 1001 ¦ A ¦ C ¦
      ¦ 1001 ¦ B ¦ D ¦
      ¦ 1002 ¦ A ¦ E ¦
      ¦ 1002 ¦ B ¦ F ¦
      ¦ 1002 ¦ C ¦ G ¦
      +--------------+

    Desired result:

      +------------------------------+
      ¦ 1001 ¦ A ¦ C ¦ B ¦ D ¦   ¦   ¦
      ¦ 1002 ¦ A ¦ E ¦ B ¦ F ¦ C ¦ G ¦
      +------------------------------+

    The VBA code I am currently using is not taking the entire contents of the matched row; it is only taking the data in the 2nd column and moving it up. VBA code:

      Sub Mergeitems()
          Dim cl As Range
          Dim rw As Range
          Set rw = ActiveCell
          Do While rw <> ""                            ' for each row in data set
              ' find first empty cell on row
              Set cl = rw.Offset(0, 1)
              Do While cl <> ""
                  Set cl = cl.Offset(0, 1)
              Loop
              ' if next row needs to be processed...
              Do While rw = rw.Offset(1, 0)
                  cl = rw.Offset(1, 1)                 ' move the data
                  Set cl = cl.Offset(0, 1)             ' update pointer to next blank cell
                  rw.Offset(1, 0).EntireRow.Delete xlShiftUp   ' delete old data
              Loop
              ' next row
              Set rw = rw.Offset(1, 0)
          Loop
      End Sub
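
    For comparison, below is a hedged sketch of a version that copies the whole matched row instead of a single cell. It assumes the data is sorted by column A, starts at the active cell, and spans columns A through X (23 data columns after the key); the names are illustrative and it is untested against the real sheet.

      Sub MergeItemsWholeRow()
          Dim rw As Range
          Dim lastCol As Long
          Set rw = ActiveCell
          Do While rw.Value <> ""
              ' while the next row has the same key in column A...
              Do While rw.Value = rw.Offset(1, 0).Value
                  ' find the last used column on the current (merged) row
                  lastCol = rw.Parent.Cells(rw.Row, rw.Parent.Columns.Count).End(xlToLeft).Column
                  ' append columns B..X of the duplicate row after it
                  rw.Offset(1, 1).Resize(1, 23).Copy rw.Parent.Cells(rw.Row, lastCol + 1)
                  ' remove the duplicate row
                  rw.Offset(1, 0).EntireRow.Delete xlShiftUp
              Loop
              Set rw = rw.Offset(1, 0)
          Loop
      End Sub

    The key differences from the original are that the inner loop copies a whole range (Resize(1, 23)) rather than assigning one cell, and that the paste target is recomputed from the end of the merged row on every pass.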

    Read the article

  • Restrict access to one SVN repository (overwrite default)

    - by teel
    I'm trying to set up our SVN server so that, by default, the developers group will have access to all repositories, but I want to override that setting on certain repositories where I want to allow access only to specific users (or separate groups). The current configuration is SVN + WebDAV on Apache 2, and all my repositories are located in /var/lib/svn/. In dav_svn.authz I currently have:

      [/]
      @developers = rw
      @users = r

    Now I want to add one repository (let's call it secret_repo) that would only allow access to one user, who is also a member of the developers group. I tried:

      [secret_repo:/]
      * =
      secret_user = rw

    where secret_user is the user I'd like to give access to the repository, but it doesn't seem to work. Currently the server is using Apache's LDAP module to authenticate users from our Active Directory domain, and I'd like to keep it that way if possible. Also, I seem to be able to browse all my repos freely with any web browser, which I'd like to block. The second problem is that I have WebSVN on the server, which is using Apache's LDAP authentication. Everyone who is a member of our domain can access it, so I'd like to hide this secret_repo from the WebSVN listing. It's configured now with parentPath("/var/lib/svn");. Do I really need to remove that and add every repository separately, except the ones I want to hide?
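
    For reference, a dav_svn.authz along these lines is the usual shape for a per-repository override; the sketch below is illustrative only (group members are placeholders), and anonymous browsing stops once authentication (e.g. Require valid-user) applies to read requests as well.

      [groups]
      developers = alice, bob, secret_user
      users = carol, dave

      [/]
      @developers = rw
      @users = r

      [secret_repo:/]
      * =
      secret_user = rw

    If the section header doesn't seem to take effect, one common cause is a mismatch between the repository's directory name under SVNParentPath and the name used before the colon in the authz section.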

    Read the article

  • Network topology for both direct and routed traffic between two nodes

    - by IndigoFire
    Despite its small size, this is the most difficult network design problem I've faced. There are three nodes in this network: a PC running Windows XP with an internal WiFi adapter, a base station with both WiFi and a wireless modem (WiModem), and a mobile device with both WiFi and a WiModem. The modem is a low-bandwidth but high-reliability connection. We'd like to use WiFi for high-bandwidth stuff like file transfers when the mobile is nearby, and the modem for control information. Here's the tricky part: we'd like the WiFi traffic to go directly from the mobile to the PC, as rebroadcasting packets on the same WiFi channel takes up double the bandwidth. We can do that with a manual configuration by giving both the PC and the base station two IP addresses for their WiFi interfaces: one on a subnet shared with the mobile, and one on their own subnet. The routes on the PC are set up so that any traffic going to the mobile via WiModem goes through the secondary IP address, so that return traffic from the mobile also goes through the WiModem. Here's what that looks like:

      PC
        WiFi 1:  192.168.2.10/24
        WiFi 2:  192.168.3.10/24
        Default route: 192.168.2.1

      Base station
        WiFi 1:  192.168.2.1/24
        WiFi 2:  192.168.3.1/24
        WiModem: 192.168.4.1/24

      Mobile
        WiFi:    192.168.3.20/24
        WiModem: 192.168.4.20/24

    We'd like to move to having the base station automatically configure the mobile and PC, as the manual setup is problematic once you have multiple mobiles and PCs. This means that the PC can only have one IP address and needs to be treated as being pretty simple. Is it possible to have a setup driven by DHCP on the base station that is efficient with bandwidth?

    Read the article

  • DNS manager in Windows Server 2012 Essentials - My one server appears twice

    - by tetranz
    I have a newly installed Windows Server 2012 Essentials. It works pretty well, although I'm working on some DNS improvements. Something that seems a little weird is that in DNS Manager my server appears twice: once as hostname and once as hostname.mydomain.local. They seem to be identical and locked in sync - if I change one, the other follows. Is this normal? Does anyone know why I have this? I'm talking about the top level of the navigation: the very top is DNS and then these two below it, with zones, forwarders, etc. below them. I've found a couple of forum posts of people asking the same thing but no useful answer. All the tutorials I can find with screenshots show only one, which makes me uncomfortable. The server was installed out of the box, as standard, with the wizards. I know about the recommendation not to use .local, but the wizards didn't give me any other option.

    Read the article

  • Running WAMP (XAMPP) and LAMP from One SSD, On 64-bit Windows and Linux Machines

    - by nicorellius
    I have a solid-state drive that I develop websites on. The reason I do this is that I work on a few different computers. Historically, I created a separate development environment for each machine. This was OK, but if the system changed for some reason, e.g. a new OS install, it was a pain. So I bought a USB 3.0 enclosure, put a solid-state drive in it, and it's pretty darn fast, which is good. I was working with three Windows machines, and I could simply hook up the drive, launch my XAMPP server, and away I went, developing websites using Dreamweaver, Komodo, Notepad++, Eclipse, etc. Recently, however, one of my Windows machines' hard drives went down, and instead of going back to Windows in this case, I went with Ubuntu 12.04. I have several Ubuntu workstations and servers and I like Linux, so I thought this was a great opportunity to transition. I went to work installing and trying to set up a LAMP server and, aside from XAMPP's lack of 64-bit compatibility out of the box, I'm seeing other issues with getting this Linux server running. I will keep trying to resolve this, but in the meantime... my question is, has anyone ever successfully run both WAMP and LAMP from the same SSD (formatted as NTFS)? I'm sure there are lots of barriers to this happening, like the local file system, OS libraries, dependencies, etc. But I was thinking it would be cool if it could be done. I'm no expert, so if this is just plain old stupid, please don't hesitate to let me know.
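
    If the shared drive stays NTFS, the Linux side usually needs the document root mounted with ownership and permissions Apache can use, since NTFS doesn't store POSIX ownership natively; a minimal sketch (device, mount point, and user are placeholders) would be:

      # requires the ntfs-3g package; files appear owned by www-data with sane permissions
      sudo mount -t ntfs-3g -o uid=www-data,gid=www-data,fmask=0133,dmask=0022 /dev/sdb1 /mnt/webssd

    The server configs themselves (Windows XAMPP vs. Ubuntu apache2 layouts, PHP paths) still differ per OS, so keeping only the web roots and databases on the shared drive and the server configuration local tends to be less fragile than sharing a whole server install.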

    Read the article

  • WINDOWS 7: Make the contents of two folders appear in one

    - by big_smile
    In Windows 7, I have three folders: "Images", "assets", and "all". I want the contents of "Images" and "assets" to appear in "all" automatically, without copying those files into that folder (i.e. I don't want to duplicate the files). I also only want the contents to be brought over, not the folders themselves. (The reason for this is that if the folders are copied over, they become subdirectories. I am using a printing hot folder that watches "all", but it can't see any subdirectories in "all".) When "Images" and "assets" are updated (e.g. with files being added or deleted), "all" should automatically update as well. How can I do this? This is what I have tried: Libraries - a feature built into Windows that works exactly as I want, but the print hot folder cannot recognise a library as a folder. The Sym Link Extension - I can use this to make the "Images" and "assets" folders appear as subdirectories of "all", but as stated, I want the contents of "Images"/"assets" to appear directly in the "all" folder, because the print hot folder cannot access subdirectories.
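
    One workaround that keeps a single physical copy is NTFS hard links: every file in "Images" and "assets" gets a same-named hard link directly inside "all", so the hot folder sees ordinary flat files. The batch sketch below uses only the built-in mklink command; the paths are placeholders, it assumes all three folders are on the same NTFS volume, and it has to be re-run (e.g. from Task Scheduler) to pick up added or deleted files.

      @echo off
      rem link every file from Images and assets into all (existing links just produce an error and are skipped)
      for %%F in ("C:\work\Images\*" "C:\work\assets\*") do (
          mklink /H "C:\work\all\%%~nxF" "%%F"
      )

    Hard links only work within one volume, and deleting a source file does not remove its link in "all", which is why a periodic re-sync (or a robocopy-based mirror) may still be needed.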

    Read the article

  • Can I host multiple sites with one Amazon EC2 instance [duplicate]

    - by user22
    This question already has an answer here: Can you help me with my capacity planning? (2 answers). I currently have a VPS server, I pay around $75 per month, and I get: 40 GB HD, 2 GB RAM, 100 GB bandwidth, and a 6-core CPU (which I don't use much). I have only one live website running, and traffic is at most 100 user visits per day. I mostly do my testing stuff and run some of my internal sites for playing with coding, but I do need one server. I am thinking of moving to Amazon EC2 if the price difference is not too big, because then I can learn some more stuff. I am thinking of getting the 3-year Heavy Utilization Reserved Instance, because my server will be running all day and night. I tried their online calculator with a Medium instance, Heavy Utilization Reserved for 3 years: for EC2 it comes to $31 per month (effective price), and even if EBS, S3, and all the other stuff add another $40, I will be at no loss compared to what I am getting at present. Am I correct, or did I miss something? Now, on my current VPS I have Apache for PHP sites and mod_wsgi for Python sites. I am not sure if I will be able to do all that on Amazon EC2. Can I host both Python and PHP sites on one Amazon EC2 instance using name-based virtual hosts and Nginx?
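
    Both stacks can coexist on a single instance; as a rough sketch, Apache alone can serve one PHP site and one mod_wsgi site with name-based virtual hosts (the domains and paths below are placeholders), and Nginx would only be needed in front as a reverse proxy if desired.

      <VirtualHost *:80>
          ServerName php-site.example.com
          DocumentRoot /var/www/php-site
      </VirtualHost>

      <VirtualHost *:80>
          ServerName py-site.example.com
          WSGIScriptAlias / /var/www/py-site/app.wsgi
      </VirtualHost>

    Sizing-wise, the main thing the price calculator doesn't capture is that on a Medium instance RAM, not CPU, is usually the first limit when Apache, mod_wsgi, and a database share one box.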

    Read the article

  • some HTTPS sites getting blocked on one machine in network

    - by shadowfoxmi
    I have a few computers connected to the internet via a router, and I have been having some trouble with one Windows 7 desktop. I can browse most sites without any trouble, but on some sites where the sign-in page switches to a secure connection (HTTPS), the page does not load. It's not all such sites, though: I'm able to sign into Gmail and a few other services that I know use HTTPS. The sites I'm having trouble with are Yahoo's sign-in page and the one I have been using to test across different systems, http://iforgot.apple.com (which switches to HTTPS); this particular site I can access from the other computers on the network and from my phone. I only have Windows Firewall and AVG running. I even tried stopping Windows Firewall, but it did not help. Everything was fine last week. All I have installed in the past week is VoIP software, namely Skype, ooVoo, and Windows Live Messenger. I'm not sure how to find out what's being blocked, why, and how to unblock it. Any suggestions would be greatly appreciated.

    Read the article

  • Calculating rotation and translation matrices between two odometry positions for monocular linear triangulation

    - by user1298891
    Recently I've been trying to implement a system to identify and triangulate the 3D position of an object in a robotic system. The general outline of the process goes as follows: Identify the object using SURF matching, from a set of "training" images to the actual live feed from the camera Move/rotate the robot a certain amount Identify the object using SURF again in this new view Now I have: a set of corresponding 2D points (same object from the two different views), two odometry locations (position + orientation), and camera intrinsics (focal length, principal point, etc.) since it's been calibrated beforehand, so I should be able to create the 2 projection matrices and triangulate using a basic linear triangulation method as in Hartley & Zissermann's book Multiple View Geometry, pg. 312. Solve the AX = 0 equation for each of the corresponding 2D points, then take the average In practice, the triangulation only works when there's almost no change in rotation; if the robot even rotates a slight bit while moving (due to e.g. wheel slippage) then the estimate is way off. This also applies for simulation. Since I can only post two hyperlinks, here's a link to a page with images from the simulation (on the map, the red square is simulated robot position and orientation, and the yellow square is estimated position of the object using linear triangulation.) So you can see that the estimate is thrown way off even by a little rotation, as in Position 2 on that page (that was 15 degrees; if I rotate it any more then the estimate is completely off the map), even in a simulated environment where a perfect calibration matrix is known. In a real environment when I actually move around with the robot, it's worse. There aren't any problems with obtaining point correspondences, nor with actually solving the AX = 0 equation once I compute the A matrix, so I figure it probably has to do with how I'm setting up the two camera projection matrices, specifically how I'm calculating the translation and rotation matrices from the position/orientation info I have relative to the world frame. How I'm doing that right now is: Rotation matrix is composed by creating a 1x3 matrix [0, (change in orientation angle), 0] and then converting that to a 3x3 one using OpenCV's Rodrigues function Translation matrix is composed by rotating the two points (start angle) degrees and then subtracting the final position from the initial position, in order to get the robot's straight and lateral movement relative to its starting orientation Which results in the first projection matrix being K [I | 0] and the second being K [R | T], with R and T calculated as described above. Is there anything I'm doing really wrong here? Or could it possibly be some other problem? Any help would be greatly appreciated.
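
    The symptom described (good results with pure translation, large errors as soon as there is rotation) is consistent with the translation column being built in the wrong frame. With camera 1 as the world origin, the second projection matrix needs the world-to-camera-2 transform, i.e. P2 = K [R | -R·C] where C is camera 2's position expressed in camera 1's frame; using the measured displacement directly as T only coincides with -R·C when R is the identity. A minimal sketch in Python with OpenCV/NumPy follows; the variable names are illustrative, and the axis chosen for the yaw rotation depends on the camera-versus-robot frame convention.

      import numpy as np
      import cv2

      def projection_matrices(K, yaw_rad, cam2_pos_in_cam1):
          # rotation of camera 2 relative to camera 1 (here: yaw about the camera's y axis)
          R, _ = cv2.Rodrigues(np.array([0.0, yaw_rad, 0.0]))
          C = np.asarray(cam2_pos_in_cam1, dtype=float).reshape(3, 1)
          P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
          P2 = K @ np.hstack([R, -R @ C])   # t = -R C, not the raw odometry displacement
          return P1, P2

      # pts1, pts2: 2xN arrays of matched pixel coordinates from the two views
      # X_h = cv2.triangulatePoints(P1, P2, pts1, pts2); X = X_h[:3] / X_h[3]

    If the estimates still degrade with rotation after this change, the remaining usual suspects are the sign/axis convention of the Rodrigues vector and whether the odometry pose refers to the robot base rather than the camera's optical center.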

    Read the article

  • Swapping two jQuery draggable list items not working properly (with jsFiddle example)

    - by Tony_Henrich
    The minimalist working example below swaps the two boxes when box 'one' is dragged and dropped on box 'two'. The problem is that when box 'one' is dropped, its style has 'top' & 'left' values causing it to be placed away from where it should drop. Its class includes 'ui-draggable-dragging'. It seems the top & left values are related to the amount the elements were dragged before the drop. And the dragging was 'interrupted', hence the residual 'ui-draggable-dragging' class? What am I missing to make the swap work seamlessly? Full jsFiddle example here.

      <html>
      <head>
        <script type="text/javascript" src="includes/jquery-1.4.2.min.js"></script>
        <script type="text/javascript" src="includes/jquery-ui-1.8.2.custom.min.js"></script>
        <script type="text/javascript">
          jQuery.fn.swapWith = function(to) {
            return this.each(function() {
              var copy_to = $(to).clone(true);
              var copy_from = $(this).clone(true);
              $(to).replaceWith(copy_from);
              $(this).replaceWith(copy_to);
            });
          };

          $(document).ready(function() {
            options = {revert: true};
            $("li").draggable(options)
            $('#wrapper').droppable({
              drop: function(event, ui) {
                $(ui.draggable).swapWith($('#two'));
              }
            });
          });
        </script>
      </head>
      <body>
        <form>
          <ul id="wrapper">
            <li id='one'>
              <div style="width: 100px; height: 100px; border: 1px solid green">one<br /></div>
            </li>
            <li id='two'>
              <div style="width: 110px; height: 110px; border: 1px solid red">two<br /></div>
            </li>
          </ul>
          <br />
        </form>
      </body>
      </html>
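
    One possible fix, sketched below, is to clear the inline offset and the leftover drag class after the swap, since the clones keep whatever inline style the element had mid-drag; this is untested against the fiddle, and the selectors simply follow the markup above.

      $('#wrapper').droppable({
        drop: function(event, ui) {
          $(ui.draggable).swapWith($('#two'));
          // the clones carry the in-flight inline position and drag class; reset both
          $('#wrapper li')
            .css({ top: '', left: '' })
            .removeClass('ui-draggable-dragging');
        }
      });

    Setting the CSS values to empty strings removes the inline properties that jQuery UI added during the drag, rather than pinning them to 0.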

    Read the article

  • Upon reboot, Linux software raid fails to include one device of a RAID1 array

    - by user1389890
    One of my four Linux software raid arrays drops one of its two devices when I reboot my system. The other three arrays work fine. I am running RAID1 on kernel version 2.6.32-5-amd64 (Debian Squeeze). Every time I reboot, /dev/md2 comes up with only one device. I can manually add the device by saying $ sudo mdadm /dev/md2 --add /dev/sdc1. This works fine, and mdadm confirms that the device has been re-added as follows:

      mdadm: re-added /dev/sdc1

    After adding the device and allowing the array time to resynch, this is what the output of $ cat /proc/mdstat looks like:

      Personalities : [raid1]
      md3 : active raid1 sda4[0] sdb4[1]
            244186840 blocks super 1.2 [2/2] [UU]
      md2 : active raid1 sdc1[0] sdd1[1]
            732574464 blocks [2/2] [UU]
      md1 : active raid1 sda3[0] sdb3[1]
            722804416 blocks [2/2] [UU]
      md0 : active raid1 sda1[0] sdb1[1]
            6835520 blocks [2/2] [UU]
      unused devices: <none>

    Then after I reboot, this is what the output of $ cat /proc/mdstat looks like:

      Personalities : [raid1]
      md3 : active raid1 sda4[0] sdb4[1]
            244186840 blocks super 1.2 [2/2] [UU]
      md2 : active raid1 sdd1[1]
            732574464 blocks [2/1] [_U]
      md1 : active raid1 sda3[0] sdb3[1]
            722804416 blocks [2/2] [UU]
      md0 : active raid1 sda1[0] sdb1[1]
            6835520 blocks [2/2] [UU]
      unused devices: <none>

    During reboot, here is the output of $ sudo cat /var/log/syslog | grep mdadm:

      Jun 22 19:00:08 rook mdadm[1709]: RebuildFinished event detected on md device /dev/md2
      Jun 22 19:00:08 rook mdadm[1709]: SpareActive event detected on md device /dev/md2, component device /dev/sdc1
      Jun 22 19:00:20 rook kernel: [ 7819.446412] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:00:20 rook kernel: [ 7819.446415] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:00:20 rook kernel: [ 7819.446782] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:00:20 rook kernel: [ 7819.446785] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:00:20 rook kernel: [ 7819.515844] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:00:20 rook kernel: [ 7819.515847] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:00:20 rook kernel: [ 7819.606829] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:00:20 rook kernel: [ 7819.606832] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:48 rook kernel: [ 8027.855616] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:48 rook kernel: [ 8027.855620] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:48 rook kernel: [ 8027.855950] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:48 rook kernel: [ 8027.855952] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:49 rook kernel: [ 8027.962169] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:49 rook kernel: [ 8027.962171] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:49 rook kernel: [ 8028.054365] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:49 rook kernel: [ 8028.054368] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [    9.588662] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [    9.588664] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [    9.601990] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [    9.601991] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [    9.602693] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [    9.602695] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [    9.605981] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [    9.605983] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [    9.606138] mdadm: sending ioctl 800c0910 to a partition!
      Jun 22 19:10:23 rook kernel: [    9.606139] mdadm: sending ioctl 800c0910 to a partition!
      Jun 22 19:10:48 rook mdadm[1737]: DegradedArray event detected on md device /dev/md2

    Here is the result of $ cat /etc/mdadm/mdadm.conf:

      ARRAY /dev/md0 metadata=0.90 UUID=92121d42:37f46b82:926983e9:7d8aad9b
      ARRAY /dev/md1 metadata=0.90 UUID=9c1bafc3:1762d51d:c1ae3c29:66348110
      ARRAY /dev/md2 metadata=0.90 UUID=98cea6ca:25b5f305:49e8ec88:e84bc7f0
      ARRAY /dev/md3 metadata=1.2 name=rook:3 UUID=ca3fce37:95d49a09:badd0ddc:b63a4792

    Here is the output of $ sudo mdadm -E /dev/sdc1 after re-adding the device and letting it resync:

      /dev/sdc1:
                Magic : a92b4efc
              Version : 0.90.00
                 UUID : 98cea6ca:25b5f305:49e8ec88:e84bc7f0 (local to host rook)
        Creation Time : Sun Jul 13 08:05:55 2008
           Raid Level : raid1
        Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
           Array Size : 732574464 (698.64 GiB 750.16 GB)
         Raid Devices : 2
        Total Devices : 2
      Preferred Minor : 2
          Update Time : Mon Jun 24 07:42:49 2013
                State : clean
       Active Devices : 2
      Working Devices : 2
       Failed Devices : 0
        Spare Devices : 0
             Checksum : 5fd6cc13 - correct
               Events : 180998

            Number   Major   Minor   RaidDevice State
      this     0       8       33        0      active sync   /dev/sdc1
         0     0       8       33        0      active sync   /dev/sdc1
         1     1       8       49        1      active sync   /dev/sdd1

    Here is the output of $ sudo mdadm -D /dev/md2 after re-adding the device and letting it resync:

      /dev/md2:
              Version : 0.90
        Creation Time : Sun Jul 13 08:05:55 2008
           Raid Level : raid1
           Array Size : 732574464 (698.64 GiB 750.16 GB)
        Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
         Raid Devices : 2
        Total Devices : 2
      Preferred Minor : 2
          Persistence : Superblock is persistent
          Update Time : Mon Jun 24 07:42:49 2013
                State : clean
       Active Devices : 2
      Working Devices : 2
       Failed Devices : 0
        Spare Devices : 0
                 UUID : 98cea6ca:25b5f305:49e8ec88:e84bc7f0 (local to host rook)
               Events : 0.180998

          Number   Major   Minor   RaidDevice State
             0       8       33        0      active sync   /dev/sdc1
             1       8       49        1      active sync   /dev/sdd1

    I also ran $ sudo smartctl -t long /dev/sdc and no hardware issues were detected. As long as I do not reboot, /dev/md2 seems to work fine. Does anyone have any suggestions?
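
    A hedged checklist for this kind of "member missing only at boot" behaviour, with the commands that usually narrow it down (nothing here is specific to this machine):

      # does the scanned array list match what the initramfs was built with?
      mdadm --detail --scan            # compare against /etc/mdadm/mdadm.conf
      sudo update-initramfs -u         # Debian: rebuild so early assembly knows about all members

      # was the disk late to appear, or kicked out, during early boot?
      dmesg | egrep -i 'sdc|md2'

    If sdc is detected noticeably later than sdd (for example a slow-spinning or differently attached disk), the array can be assembled degraded before the second member shows up, which would match an array that is always clean once re-added by hand.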

    Read the article
