Search Results

Search found 21310 results on 853 pages for 'multiple domains'.


  • Using multiple wifi connections simultaneously on Windows

    - by Salman A
    My office PC has one wireless network card, and there are three available Wi-Fi connections: primary, backup, and backup of a backup (grin). Is it possible for me to use all three simultaneously? If this results in an increase in bandwidth, that's well and good, but the primary reason is that every now and then one of the networks fails and I have to switch back and forth between the available networks by disconnecting, viewing available networks, and connecting to the next one, hoping it's running. Do I need more than one network card, or some software such as a proxy?

    Read the article

  • How to get Windows Server 2008 VM to use multiple cores

    - by David Fraser
    I have a Windows Server 2008 machine running in VirtualBox. On initial installation, only one processor was made available, but now I want to run it as a multiprocessor machine. I have made all four cores available in the VirtualBox settings (as well as enabling VT-x/AMD-V and Nested Paging), but Task Manager still only shows one CPU. However, the four CPU cores are visible in Device Manager under Processors. In the event log on startup, I can see the following relevant events:
      EventLog.6009  Microsoft (R) Windows (R) 6.00.6002 Service Pack 2 Multiprocessor Free
      Kernel-Processor-Power.4  Processor 0 exposes the following: 1 idle state(s), 0 performance state(s), 0 throttle state(s)
      Kernel-Processor-Power.4  Processor 255 exposes the following: 0 idle state(s), 0 performance state(s), 0 throttle state(s)
      Kernel-Processor-Power.4  Processor 255 exposes the following: 0 idle state(s), 0 performance state(s), 0 throttle state(s)
      Kernel-Processor-Power.4  Processor 255 exposes the following: 0 idle state(s), 0 performance state(s), 0 throttle state(s)
    How can I make this system actually boot up as a multiprocessor machine?

    Read the article

  • Backing up server and multiple clients

    - by inquam
    I'm running an Amahi server; it's basically a Fedora 14 x64 installation. I'm looking for a good solution to back up the 200 GB system drive on the server to an external USB/eSATA drive every night. I looked into using dd, but since other things might be running on the server at the same time it didn't feel quite safe. I would like the backups to be incremental, so the backups following the initial one would be quite fast. The backup should also be bootable, or perhaps be able to produce a bootable disk after booting from a CD or something. I would also like the server to be able to do similar backups of my clients running Ubuntu, Windows 7 x64, Windows 7 Starter, OS X Lion, Windows XP and so on. So not an application that backs up only shared folders or something like that. My guess is a client daemon would have to exist that would lock the system to allow backup of a Windows system drive, which can otherwise be quite cranky. My ideal goal is to boot a CD in a crashed client, connect to the server, restore the latest backup, and be up and running. Is there anything out there that would fit these needs?

    Read the article

  • How to serve pages through multiple frameworks/template engines efficiently

    - by Leftium
    I would like to render a file that has both PHP tags and Web2py tags mixed together. To do this, I would like the web server to pass the file through Web2py, then PHP. I found a method to call PHP from Web2py via Python (based on this method for running PHP on top of Django), but this method loses the benefits of any server optimizations from mod_php or FastCGI, like caching and multi-threaded operation. A new process is created for each PHP request, which is very slow. Is there a better way to efficiently render pages with both Web2py (Python) and PHP tags in the same file? Note that I am not looking for methods of serving PHP-only and Web2py-only files from the same server/domain. I prefer solutions for Apache2 or Cherokee, but I'm open to using other web servers. Background info: I prefer to develop in Web2py, but we have this pre-existing system written in PHP. I would like to augment the PHP system with some of Web2py's features, like its Auth authentication/user management and the T() internationalization object. It would also make it much easier to port the PHP project to Web2py if it could be done piecemeal. Since the PHP project consists of many files, it would greatly help if they did not need modification.
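
    For context, a minimal sketch of the per-request approach described above (hypothetical helper name, assuming the php CLI binary is installed and on PATH): spawning a fresh PHP process for every request is exactly what makes that method so slow compared to mod_php or FastCGI.

    import subprocess

    def render_php(template_path):
        # Naive approach: spawn a new php process per request.
        # Every call pays full interpreter start-up cost, with none of
        # mod_php/FastCGI's opcode caching or worker pooling.
        result = subprocess.run(
            ["php", template_path],   # assumes the php CLI is on PATH
            capture_output=True,
            text=True,
            check=True,
        )
        return result.stdout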

    Read the article

  • How do I set permissions structure for multiple users editing multiple sites in /var/www on Ubuntu 9

    - by Michael T. Smith
    I'm setting up an Ubuntu server that will have 3 or 4 VirtualHosts that I want users to be able to work in (add new files, edit old files, etc.). I currently plan on storing the sites in /var/www but wouldn't be opposed to moving them. I know how to add new users and I know how to add new groups; I'm unsure of the best way to handle users only being able to edit some sites. I read over the answers in this question, so I was thinking I could set up a group and add users to that group, but then they'd all have essentially the same permissions. Am I just going to have to assign each user specific permissions? Or is there a better way of handling this? Added: I should also note that each user will log in via SSH/SFTP. The users would never need to do anything else on the server.

    Read the article

  • Skype DualPhone - Multiple Accounts

    - by Richard Hedges
    I have two Skype DualPhone 3088s. I signed in to Skype on one of the phones, and it signed in to the account on both phones. What I've been trying to do is have one phone signed in to one Skype account and the other signed in to another account. I can't find anything to help on their website, as browsing to my specific product redirects to their homepage. Is what I'm trying to do possible? If so, can someone help explain how to do it?

    Read the article

  • How to use multiple DNS servers?

    - by Enrichman
    When I connect at work, the network assigns me a DNS server that works fine. Then when I connect to the VPN, I receive a different DNS server. With that one I can reach the VPN owner's servers, but I'm not able to get out to the internet. BUT if I switch back to the old DNS server I'm able to surf again (still connected to the VPN, but I cannot reach their servers). Recap:
      DNS1) MyPC - CompanyProxy - Internet
      DNS2) MyPC - CompanyProxy - VPN - NoInternet (can ping VPN servers)
      DNS1) MyPC - CompanyProxy - VPN - Internet (cannot ping VPN servers)
    Weirdest thing: I'm able to do an nslookup from anywhere, but ping fails. Is it possible to use both DNS servers? Or set up a DNS server just for the browser? I'm quite lost.

    Read the article

  • How do boot sectors and multiple drives work?

    - by GiH
    I don't fully understand the concept of a boot sector, and I was hoping someone could clear this up for me. If you have two hard drives with an OS installed on each, does each drive have its own boot sector? Does each drive need an MBR partition? I've got Linux and Windows on two separate drives. I've had issues when installing Linux and GRUB, and now I've finally decided to use the Windows bootloader to start up. Would Windows have gotten rid of GRUB when I used /fixmbr, or does it stay there on the boot sector of the other drive?

    Read the article

  • Video desktop recording with multiple WM displays: capturing a non-active display

    - by okobaka
    Two WMs are running on one local machine; the WM is Fluxbox. I'm using ffmpeg to record the desktop:
      ffmpeg -an -f x11grab -s 1920x1080 -r 25 -i :1.0 -sameq /tmp/video.mkv
    On one display everything works great, but not when I have another WM display started with startx -- :1. What I am doing right now is switching with Ctrl+Alt+F8 to display :1.0 and starting the recording with ffmpeg. Everything is fine until I switch back with Ctrl+Alt+F7 to display :0.0: the WM and the captured video image freeze, but when I switch back with Ctrl+Alt+F8 to display :1.0, it unfreezes and continues recording. So, how do I make display :1.0 not freeze while I am on display :0.0? Tested some more:
      open [display 0.0]
      open [display 0.1] from [display 0.0] = open => [display 0.2], same problem
    The results are the same for different users and for the same user; ffmpeg keeps recording that paused image. It looks like the WM root window needs to be active to be recorded.

    Read the article

  • Multiple, Simultaneous Factories and Protocols in Twisted: Same Service, Different Ports

    - by RichardCroasher
    Greetings, Forum. I'm working on a program in Python that uses Twisted to manage networking. The basis of this program is a TCP service that is to listen for connections on multiple ports. However, instead of using one Twisted factory to handle a protocol object for each port, I am trying to use a separate factory for each port. The reason for this is to force a separation among the groups of clients connecting to the different ports. Unfortunately, it appears that this architecture isn't quite working: clients that connect to one port appear to be visible to all the factories (e.g., the protocol class used by each factory includes a 'self.factory.clients.append(self)' statement... instead of adding a given client to just the factory for a particular port, the client is added to all factories), and whenever I shut down the service on one port, the listeners on all ports also stop. I've been working with Twisted for a short while, and fear I simply don't fully understand how its factory classes are managed. My question is: is it simply not possible to have multiple, simultaneous instances of the same factory and same protocol in use across different ports (without these instances stepping on each other's toes)?
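
    Twisted does support multiple simultaneous factories on different ports. For reference, a minimal sketch (hypothetical class names) where each factory keeps its clients list as an instance attribute; a class-level clients = [] would be shared by every factory instance, which would produce exactly the "client appears in all factories" symptom described above.

    from twisted.internet import protocol, reactor

    class PortProtocol(protocol.Protocol):
        def connectionMade(self):
            # self.factory is set by Factory.buildProtocol(), so the client
            # is tracked only by the factory listening on this port.
            self.factory.clients.append(self)

        def connectionLost(self, reason):
            self.factory.clients.remove(self)

    class PortFactory(protocol.Factory):
        protocol = PortProtocol

        def __init__(self):
            # Instance attribute: each factory gets its own client list.
            # A class attribute here would be shared across all factories.
            self.clients = []

    # One independent factory (and listener) per port; calling
    # stopListening() on one listener does not affect the others.
    listeners = [reactor.listenTCP(port, PortFactory()) for port in (8001, 8002, 8003)]
    reactor.run()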

    Read the article

  • How does one open multiple tabs in TextWrangler?

    - by Closure Cowboy
    No, I'm not bluffing. I really can't figure this out. The setup: I went to File -> Open, and then selected a directory rather than a file. As expected, a directory tree opened on the left side of my document. Hooray! I can easily view the files' structure in my Rails project. So, I make a few changes in a file, and then I click on a different file in the directory tree. My problem: TextWrangler then asks me whether I want to save my changes. Huh? I say "No", and the new document doesn't open at all. Great. I try hitting Command+N (new document). A new window opens. Ughhhh. How the heck do I open documents in a new tab? Note: I have set the "New & opened documents" behavior to "Open in the front window". This does not change the behavior described (i.e. when a directory is opened rather than a single file).

    Read the article

  • Photo Management Software for OS X That Supports Multiple Library Locations

    - by Lance Rushing
    I'm looking at the possibility of changing our photo management software, and thinking about Aperture or Lightroom. I want to know whether either supports:
      - Having folders/libraries on separate hard drives/volumes
      - Tolerating network folders that are only occasionally connected
      - A snappy interface
      - Being solid enough not to crash or behave weirdly [so the wife doesn't get stuck and have to call me ;)]
    Background: my wife is the photographer and I'm the computer programmer. Our current setup is Picasa, with a "Recent/Working" folder on the local iMac hard drive and "Archive" folders on an NFS-mounted Linux server (RAID 5, for redundancy and extra storage capacity; the Linux server also syncs with Amazon's EC2). Picasa is doing OK, but I get annoyed when it doesn't behave properly, usually around issues when the Linux disk isn't mounted. Overall, I wish Picasa seemed a little more polished and snappier. Thanks, Lance

    Read the article

  • php.ini: using multiple include paths - open_basedir restriction

    - by hfidgen
    I need to allow an include path for a vhost subdomain on Plesk 10. I've added the PHP PEAR path to /etc/php.ini, as I'm happy for it to be globally available:
      include_path = ".:/usr/share/pear/"
    This works insofar as PHP is able to see the files in that directory when a script tries to include them, but I'm getting the dreaded open_basedir error:
      Warning: require_once() [function.require-once]: open_basedir restriction in effect. File(/usr/share/pear/xxxx.php) is not within the allowed path(s): (/var/www/vhosts/xxxx.com/subdomains/test/httpdocs/:/tmp/)
    Am I right in saying that the subdomain or main domain can have a vhost.conf file in which I can alter the open_basedir allowed paths? I've tried searching out solutions but I'm afraid I can't quite see one yet :)

    Read the article

  • Picture syncing across multiple Macs, iPhones, and an iPad so each device can update them all

    - by cohortq
    Hello! One of the owners of my company has tasked me with syncing his pictures across the following devices:
      (2) iPhones
      (2) iMacs
      (1) MacBook Air
      (1) iPad
    Here is what is happening:
    1) He has a camera that can upload pictures into iPhoto on either of his iMacs or the MacBook Air.
    2) He has two different iPhones, and here is how they are paired up:
      iPhone - iMac Home
      iPhone - MacBook Air
    3) He has MobileMe syncing Calendar, Contacts, and Notes across all devices.
    4) Currently we are using MobileMe web galleries to sync all photos, by having ME create each album and upload it to the MobileMe web gallery.
    Now the problem is: he wants to just take pictures and, once he does that, have them sync with all his devices; he'll even dock the iPad. Is there a better way to sync photos between all devices?

    Read the article

  • Ubuntu: Multiple NICs, one used only for Wake-On-LAN

    - by jcwx86
    This is similar to some other questions, but I have a specific need that is not covered in the other questions. I have an Ubuntu server (11.10) with two NICs. One is built into the motherboard and the other is a PCI Express card. I want to have my server connected to the internet via my NAT router and also have it able to wake from suspend using a Magic Packet (henceforth referred to as Wake-On-LAN, WOL). I can't do this with just one of the NICs because each has an issue - the built-in NIC will crash the system if it is placed under heavy load (typically downloading data), whilst the PCI Express NIC will crash the system if it is used for WOL. I have spent some time investigating these individual problems, to no avail. My plan is thus: use the built-in NIC solely for WOL, and use the PCI Express card for all other network communication except WOL. Since I send the WOL Magic Packet to a specific MAC address, there is no danger of hitting the wrong NIC, but there is a danger of using the built-in NIC for general network access, overloading it and crashing the system. Both NICs are wired to the same LAN with address space 192.168.0.0/24. The built-in ethernet card is set to have interface name eth1 and the PCI Express card is eth0 in Ubuntu's udev persistent rules (so they stay the same upon reboot). I have been trying to set this up with the /etc/network/interfaces file. Here is where I am currently:
      auto lo
      iface lo inet loopback
      auto eth0
      iface eth0 inet static
          address 192.168.0.3
          netmask 255.255.255.0
          network 192.168.0.0
          broadcast 192.168.0.255
          gateway 192.168.0.1
      auto eth1
      iface eth1 inet static
          address 192.168.0.254
          netmask 255.255.255.0
    I think by not specifying a gateway for eth1, I prevent it being used for outgoing requests. I don't mind if it can be reached on 192.168.0.254 on the LAN, i.e. via SSH -- its IP is irrelevant to WOL, which is based on MAC addresses -- I just don't want it to be used to access internet resources. My kernel routing table (from route -n) is:
      Kernel IP routing table
      Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
      0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 eth0
      169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
      192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
      192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
    My question is this: is this sufficient for what I want to achieve? My research has thrown up the idea of using static routing to specify that eth1 should only be used for WOL on the local network, but I'm not sure this is necessary. I have been monitoring the activity of the interfaces using iptraf and it seems like eth0 takes the vast majority of the packets, though I am not sure that this will be consistent given my configuration. If I mess up the configuration, my system will likely crash, so it is important to me to have this set up correctly!

    Read the article

  • How to configure remote access to multiple subnets behind a SonicWALL NSA 2400

    - by Kyle Noland
    I have a client that uses a SonicWALL NSA 2400 as their firewall. I need to set up a second LAN subnet for a handful of PCs. Management has decided that there should be a second subnet even though they intend to allow access across the two subnets - I know... I'm having trouble getting communication across the two subnets. I can ping each gateway, but I cannot ping or seem to route traffic from subnet A to subnet B. Here is my current setup:
      X0 Interface: LAN zone with IP address 192.168.1.1
      X1 Interface: WAN zone with WAN IP address
      X2 Interface: LAN zone with IP address 192.168.75.1
    I have configured ARP and routes for the secondary subnet (X2) according to this SonicWALL KB article: http://www.sonicwall.com/downloads/supporting_multiple_firewalled_subnets_on_sonicos_enhanced.pdf using "Example 1". At this point I don't mind if I have to throw the SonicWALL GVC software VPN client into the mix to make it work. It feels like I have an Access Rule issue, but for testing I made the LAN-to-LAN, WAN-to-LAN and VPN-to-LAN rules wide open, with the same results.

    Read the article

  • Batch copy multiple folders and their subfolders to another folder

    - by DjLenny
    I have a folder X:\Export that has several folders:
      X:\Export\Export1
      X:\Export\Export2
      X:\Export\Export3
    etc. (the names vary by a large factor). Each Export folder has the same subdirectory structure but contains different files. I would like to copy all the subfolders and files of X:\Export\Export1, X:\Export\Export2, X:\Export\Export3 to a folder X:\Export\mergedExports, keeping the subdirectory structure. Pseudocode of what I would like to do but cannot get working properly:
      create new folder "merged"
      for (every folder X in a given directory Y)
          copy every file in X, keeping directory structure, to "merged"
          if conflict then overwrite
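
    A minimal Python sketch of that merge (assumed paths; shutil.copytree with dirs_exist_ok=True, available from Python 3.8, merges into existing subfolders and overwrites files on conflict):

    import shutil
    from pathlib import Path

    export_root = Path(r"X:\Export")            # assumed source root
    merged = export_root / "mergedExports"
    merged.mkdir(exist_ok=True)

    for folder in export_root.iterdir():
        # Skip the merge target itself and any loose files.
        if folder.is_dir() and folder != merged:
            # dirs_exist_ok=True merges into existing directories;
            # files that already exist in the target are overwritten.
            shutil.copytree(folder, merged, dirs_exist_ok=True)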

    Read the article

  • nginx serving php for download (previously: nginx multiple location alias 404)

    - by torsten
    I'm having issues with the alias locations in the following configuration:
      server {
          listen 80;
          server_name localhost;
          root /srv/http/share;
          index index.php;
          include php.conf;
          location / {
              try_files $uri $uri/ /index.php$is_args$args;
          }
          location /phpmemcachedadmin {
              alias /srv/http/phpmemcachedadmin;
          }
          location /webgrind {
              alias /srv/http/webgrind;
          }
      }
    While / works well, I'm getting a 404 for /webgrind and /phpmemcachedadmin. If I switch the root directory to /srv/http and alias the / location, /phpmemcachedadmin and /webgrind work, but not the / location.
    UPDATE: I managed to sort out the problems and got all locations to work, so here is the updated config:
      #user html;
      worker_processes 2;
      #error_log logs/error.log;
      #error_log logs/error.log notice;
      #error_log logs/error.log info;
      #pid logs/nginx.pid;
      events {
          worker_connections 1024;
      }
      http {
          include mime.types;
          default_type application/octet-stream;
          sendfile on;
          keepalive_timeout 65;
          gzip on;
          server {
              listen 80;
              server_name localhost;
              location / {
                  root /srv/http/share;
                  index index.php;
                  try_files $uri $uri/ /index.php$is_args$args;
                  include php.conf;
              }
              location /phpmemcachedadmin {
                  root /srv/http;
                  index index.php;
                  try_files $uri $uri/ /index.php$is_args$args;
                  include php.conf;
              }
              location /webgrind {
                  root /srv/http;
                  index index.php;
                  try_files $uri $uri/ /index.php$is_args$args;
                  include php.conf;
              }
          }
      }
    The php.conf looks like this:
      location ~ \.php$ {
          try_files $uri =404;
          fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
          fastcgi_index index.php;
          include fastcgi.conf;
      }
    while the fastcgi.conf looks like this:
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      fastcgi_param QUERY_STRING $query_string;
      fastcgi_param REQUEST_METHOD $request_method;
      fastcgi_param CONTENT_TYPE $content_type;
      fastcgi_param CONTENT_LENGTH $content_length;
      fastcgi_param SCRIPT_NAME $fastcgi_script_name;
      fastcgi_param REQUEST_URI $request_uri;
      fastcgi_param DOCUMENT_URI $document_uri;
      fastcgi_param DOCUMENT_ROOT $document_root;
      fastcgi_param SERVER_PROTOCOL $server_protocol;
      fastcgi_param HTTPS $https if_not_empty;
      fastcgi_param GATEWAY_INTERFACE CGI/1.1;
      fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
      fastcgi_param REMOTE_ADDR $remote_addr;
      fastcgi_param REMOTE_PORT $remote_port;
      fastcgi_param SERVER_ADDR $server_addr;
      fastcgi_param SERVER_PORT $server_port;
      fastcgi_param SERVER_NAME $server_name;
      # PHP only, required if PHP was built with --enable-force-cgi-redirect
      fastcgi_param REDIRECT_STATUS 200;
    But there is a problem serving phpmemcachedadmin. If I call localhost/phpmemcachedadmin/index.php it works quite well (the served file shows up in the access log). On the other hand, if I just call localhost/phpmemcachedadmin/ it serves me the file for download. Neither error.log nor access.log logs anything when I get served the file for download. Any ideas?

    Read the article

  • PL/SQL - How to pull data from 3 tables based on latest created date

    - by Nancy
    Hello, I'm hoping someone can help me, as I've been stuck on this problem for a few days now. Basically I'm trying to pull data from 3 tables in Oracle: 1) an Orders table, 2) a Vendor table, and 3) a Master Data table. Here's what the 3 tables look like:
    Table 1: BIZ_DOC2 (Orders table)
      OBJECTID (unique key)
      UNIQUE_DOC_NAME (document name, i.e. ORD-005)
      CREATED_AT (date the order was created)
    Table 2: UDEF_VENDOR (Vendors table)
      PARENT_OBJECT_ID (this matches up to the OBJECTID in the Orders table)
      VENDOR_OBJECT_NAME (this is the name of the vendor, i.e. Acme)
    Table 3: BIZ_UNIT (Master Data table)
      PARENT_OBJECT_ID (this matches up to the OBJECTID in the Orders table)
      BIZ_UNIT_OBJECT_NAME (this is the name of the business unit, i.e. widget A, widget B)
    Note: the Vendors table and the Master Data table do not have a link between them except through the Orders table. I can join all of the data from the tables and it looks something like this, before selecting the latest order date:
      ORD-005 | Widget A | Acme | 3/14/10
      ORD-005 | Widget B | Acme | 3/14/10
      ORD-004 | Widget C | Acme | 3/10/10
    Ideally I'd like to return the latest order for each vendor. However, each order may contain multiple business units (e.g. types of widgets), so if a vendor's latest record is ORD-005 and the order contains 2 business units, here's what the result set should look like, with the columns UNIQUE_DOC_NAME, BIZ_UNIT_OBJECT_NAME, VENDOR_OBJECT_NAME, CREATED_AT, after selecting by latest order date:
      ORD-005 | Widget A | Acme | 3/14/10
      ORD-005 | Widget B | Acme | 3/14/10
    I tried using SELECT MAX and several variations of sub-queries but I just can't seem to get it working. Any help would be hugely appreciated!

    Read the article

  • Using multiple PaaS Vendors

    - by jpabluz
    I am developing a SaaS app, and I want to decide on a PaaS vendor. Since one of my biggest concerns is uptime, is there an application or service that allows me to use several PaaS vendors (like Azure, Google App Engine, Amazon Web Services, etc.)? I want my application to be able to switch from one PaaS vendor to another almost instantly, without any downtime, to take advantage of the redundancy this provides. This means that I need to be able to use the different services homogeneously.

    Read the article

  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application that is running on a few boxes (5 or so, and the number will grow). The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand up until now and don't want to anymore (static IP addresses, disabling all unnecessary services, installing required packages...). Can anyone balance the pros and cons of the following options, or suggest something more intelligent?
    1: Individually install CentOS on all the boxes and manage the configs with Chef/CFEngine/Puppet. This would be good, as I have wanted an excuse to learn to use one of these applications, but I don't know if this is actually the best solution.
    2: Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications I can just reboot the boxes from a new image. How do cluster folks normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the hwaddr is wrong. Can these be correctly generated at boot? (See the sketch below.)
    I'm leaning towards the PXE solution, but I think monitoring with Munin or Nagios will be a little more complicated with this. Anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful. Thanks, matt.
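
    On the "generated at boot" question, a rough Python sketch of the idea (assumed file contents, not a tested boot script): each node regenerates its ifcfg files from the MAC addresses the kernel reports, so a shared PXE image never carries another machine's hardware addresses.

    import os

    NET_SYS = "/sys/class/net"
    SCRIPTS = "/etc/sysconfig/network-scripts"

    for iface in os.listdir(NET_SYS):
        if iface == "lo":
            continue
        # MAC address as the kernel sees it on this particular box.
        with open(os.path.join(NET_SYS, iface, "address")) as f:
            mac = f.read().strip()
        # Minimal ifcfg stanza; a real file would add IPADDR/NETMASK etc.
        cfg = "DEVICE=%s\nHWADDR=%s\nONBOOT=yes\nBOOTPROTO=dhcp\n" % (iface, mac)
        with open(os.path.join(SCRIPTS, "ifcfg-%s" % iface), "w") as f:
            f.write(cfg)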

    Read the article

  • Can I use wildcards in puppet package ensure to cover multiple release versions?

    - by Rob van den Eijnde
    Using Puppet, I want to update packages on my CentOS 5 & 6 servers in a controlled way. Therefore I don't want to use ensure => latest but rather ensure => 3.0.1-1. Example:
      class puppet::installation inherits puppet {
          package { "puppet":
              ensure => "3.0.1-1",
          }
      }
    The update works all right, but the puppet agent keeps complaining that there is a difference:
      /Stage[main]/Puppet::Installation/Package[puppet]/ensure: current_value 3.0.1-1.el6, should be 3.0.1-1 (noop)
    I can solve this by changing the ensure rule to 3.0.1-1.el6, but then that won't work on CentOS 5. Is there a short/clean way to solve this, or do I have to write two separate, OS-release-version-dependent rules? I have been googling for a solution but didn't find anything pertaining to this particular question. Any suggestion or reference to a relevant example would be appreciated.

    Read the article

  • Low performance on HPC cluster (SGE) when running multiple jobs

    - by Yotam
    I know this is a long shot, but I'm clueless here. I'm running several computer simulations on a High Performance Computing (HPC) cluster running Oracle Grid Engine (SGE). A single job runs at a certain speed (roughly 80 steps per second); when I add jobs to the machine, at a certain threshold, the speed drops by a factor of two. On one machine (I don't know the CPU type) the threshold is 11 jobs for 16 CPUs. On another one with the same number and kind of CPUs, the threshold is 8. I thought at first that this was a memory issue, but each job takes about 60-100 MB and I have 16 GB of RAM on each of those machines. Did any of you encounter such a problem? Is there any way to analyze this? Thanks.

    Read the article
