Search Results

Search found 23062 results on 923 pages for 'multiple models'.

Page 769/923

  • How to limit reverse SSH tunnelling ports?

    - by funktku
    We have a public server which accepts SSH connections from multiple clients behind firewalls. Each of these clients creates a reverse SSH tunnel by using the ssh -R command from their web servers at port 80 to our public server. The destination port (at the client side) of the reverse SSH tunnel is 80 and the source port (at the public-server side) depends on the user. We are planning on maintaining a map of port numbers for each user. For example, client A would tunnel their web server at port 80 to our port 8000; client B from 80 to 8001; client C from 80 to 8002.

        Client A: ssh -R 8000:internal.webserver:80 clienta@publicserver
        Client B: ssh -R 8001:internal.webserver:80 clientb@publicserver
        Client C: ssh -R 8002:internal.webserver:80 clientc@publicserver

    Basically, what we are trying to do is bind each user to one port and not allow them to tunnel to any other ports. If we were using the forward-tunneling feature of SSH with ssh -L, we could restrict which ports may be tunneled by using the permitopen=host:port option. However, there is no equivalent for reverse SSH tunnels. Is there a way of restricting reverse tunneling ports per user?
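
    For illustration only (not part of the original question): newer OpenSSH releases added a PermitListen directive and a matching permitlisten= authorized_keys option that restrict which ports a client may bind with -R, much as permitopen does for -L. If memory serves this arrived around OpenSSH 7.8, so it is not available on older servers. A minimal per-user sketch in sshd_config, assuming such a version:

        Match User clienta
            AllowTcpForwarding remote
            PermitListen localhost:8000
        Match User clientb
            AllowTcpForwarding remote
            PermitListen localhost:8001

    With this in place, clienta's ssh -R 8000:... succeeds while an attempt to bind any other port is refused by the server.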

    Read the article

  • No sound Ubuntu 10.10 (have SB Live! card)

    - by Chris Frazier
    I am new to Linux/Ubuntu and I just installed 10.10 on my Dell desktop PC that was running Windows XP. The PC is about 5-6 years old and its sound card is a SoundBlaster Live! card. Sound Preferences recognizes the card as [SB Live! Value] EMU10k1X, and it is currently set to Analog Stereo Duplex. I tried multiple other sound configurations and nothing works: no sound, just some clicks from the speakers. I ran the system test and during the audio tests the same thing happened, just clicks and pops. I tried to play some music with Rhythmbox; when it tries to play a track it struggles for several seconds and then either closes itself or simply doesn't play. I ran a check for drivers, but the only driver it found to install was for my nVidia video card. It did not show any drivers for the sound card. Does anyone have any idea what I need to do to get the sound working?
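
    A few basic ALSA checks that are often worth running in this situation (a diagnostic sketch, not a confirmed fix; the card/device numbers are illustrative):

        aplay -l                          # list the playback devices ALSA actually sees
        alsamixer -c 0                    # check that Master/PCM on card 0 are not muted (no 'MM')
        speaker-test -c 2 -D plughw:0,0   # send a test tone straight to the card, bypassing PulseAudio

    If speaker-test produces clean audio, the card and driver are fine and the problem is more likely in the PulseAudio/profile layer; if it also clicks and pops, the issue is at the driver/hardware level.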

    Read the article

  • Google Chrome is running my system out of memory

    - by jasondavis
    I am running Windows 7 x64 with 12GB of RAM. I often have multiple windows and a ton of tabs open, and I use the extension Session Buddy to restore all my windows and tabs once memory usage gets too high. My 12GB of RAM will get up to around 93% used because of Chrome; I can then close Chrome, restore the same set of windows and tabs, and it will only use about 25% of memory, after which it climbs back into the 90% zone over several hours. It seems that when I close tabs the memory is not freed, so as tabs are opened and closed the usage just keeps adding up; this sounds like a huge bug in Chrome. Just for an example, I rebooted my system with only 1 window and 4 tabs open, and Task Manager shows 29 chrome.exe processes. I then killed all Chrome processes and opened a Chrome window with just 1 tab, and it created 27 chrome.exe processes. Is this an issue that others have? More importantly, is there a fix? UPDATE: I just read that each plugin and extension creates a chrome.exe process; I then counted 24 extensions, which explains a good portion of the processes. Still not sure about the memory not being freed up, though!
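
    As a quick way to repeat the process count without opening Task Manager (a sketch, assuming a stock Windows 7 command prompt):

        tasklist /FI "IMAGENAME eq chrome.exe" | find /c "chrome.exe"

    Chrome's own task manager (Shift+Esc) also breaks memory usage down per tab and per extension, which helps attribute where the 90% is actually going.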

    Read the article

  • iTunes Home Sharing only works one way between 2 Windows XP PCs on the same LAN

    - by scunliffe
    Both PCs have the latest iTunes installed. PC (A) can "see" that there is a shared library "B library", but attempts to connect to it return this error message: The shared library "{Username}'s Library" is not responding (-3259). Check that any firewall software running on either the shared computer or this computer has been set to allow communication on port 3689. The reverse, however, works fine: PC (B) can "see" shared library "A library" and can access all content. Notes: Both PCs have Home Sharing enabled (turned off/on several times to verify). Both PCs have Windows Firewall turned on, but in the Exceptions tab iTunes is allowed, and port 3689 is also added as a firewall exception (just in case). Both iTunes accounts have been "authorized" on both PCs. Both PCs connect via LAN through a D-Link DIR-615 router, and in its advanced application rules iTunes has also been added to allow traffic on port 3689 unhindered. Is there any other magical setting/configuration option that I should be aware of and set in order to get this to work? I couldn't care less about sharing apps etc.; I just want the music sharing to work. Update: Solved! It turns out that on PC (B) there were multiple accounts set up. One of the accounts had the "No exceptions" checkbox checked under the Windows Firewall "On" option, so even though iTunes was added to the exception list on the main user account, this other account was blocking access.
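
    For reference, the XP-era firewall state can also be inspected and adjusted from a command prompt, which makes the "No exceptions" mode easy to spot (a sketch; the rule name is arbitrary):

        REM shows whether the firewall is in "no exceptions" mode for the current profile
        netsh firewall show opmode
        REM opens the Home Sharing port explicitly
        netsh firewall add portopening TCP 3689 "iTunes DAAP"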

    Read the article

  • Measure Upload Speed between a client and our server

    - by tresstylez
    We host a SAAS application specially customized for multiple clients. One customer in particular is reporting sporadic performance issues from various locations on their network, in particular when UPLOADING documents through a form on our website. The client claims they have "bandwidth to spare" and that utilization of their "pipe" is so low that it MUST be our application, but our application has MANY clients and all features are working fine for everyone else. Interestingly enough, DOWNLOADS (i.e. just accessing the website, or downloading documents) work fine. A speed test shows that they should get 1.2 Mbps up, so a 3 MB file should take about 20 seconds to upload. It takes 60+ seconds on their network, and sometimes even small files take OVER 10 minutes to upload or time out. Pings and traceroutes don't show any abnormally long hops or response times. They claim other SAAS applications they use allow them to upload just fine. Both IT teams are working together to resolve this issue. What kind of data can I request from the clients to begin ruling things out? It seems like we need to somehow measure the LATENCY of the networks involved, or even at the switch level understand whether packets are getting dropped somewhere and why. Where should I start? Any help is appreciated; I'll provide more info upon request.
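
    One cheap data point to request from the client is a timed upload from the command line, which removes the browser and the form itself from the equation (a sketch; the URL and form field are illustrative):

        # create a 3 MB test file and time an upload of it
        dd if=/dev/zero of=test3mb.bin bs=1M count=3
        curl -o /dev/null -w "upload rate: %{speed_upload} bytes/s, total time: %{time_total}s\n" \
             -F "file=@test3mb.bin" https://app.example.com/upload

    If the raw rate is close to the advertised 1.2 Mbps outside the browser, the problem is more likely in the application path; if it is far below, it points back at the client's network.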

    Read the article

  • Apache, error logs and logrotate: what is the best method?

    - by OlivierDofus
    Hi! Here's a vhost example from my sites:

        <VirtualHost *:80>
            DocumentRoot /datas/web/woog
            ServerName woog.com
            ServerAlias www.woog.com
            ErrorLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/error_log 86400"
            CustomLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/access_log 86400" combined
            DirectoryIndex index.php index.htm
            <Location />
                Allow from All
            </Location>
            <Directory /*>
                Options FollowSymLinks
                AllowOverride Limit AuthConfig
            </Directory>
        </VirtualHost>

    I've got 12 sites running now. This gives something like:

        [Shake]:/sources/software/mod_log_rotate# ps x | grep rotate
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/error_log 86400
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/error_log 86400
        [snip (as many error_log processes as virtual hosts)]
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/access_log 86400
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/access_log 86400
        [snip (as many access_log processes as virtual hosts)]
        grep rotate
        [Shake]:/sources/software/mod_log_rotate#

    I've been looking everywhere, but I've only found mod_log_rotate. The "little" problem is that the author (a very good C developer) explains: "Unfortunately Apache error logs are handled in such a way that we can't work the same log rotation magic on them. Like transfer logs they support piped logging though so you can still use rotatelogs for them." So my question is: what would be the best way to handle multiple logs? If I just write classic log files and use the system's "logrotate" program, wouldn't that be a good deal? How would/do you deal with that? Thank you!
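
    If the vhosts were switched to plain (non-piped) ErrorLog/CustomLog files, a single system logrotate rule could cover all of them. A rough sketch, assuming the /logs/<site>/ layout from the question and that apachectl lives under the same install prefix:

        # /etc/logrotate.d/apache-vhosts
        /logs/*/access_log /logs/*/error_log {
            daily
            rotate 30
            compress
            delaycompress
            missingok
            notifempty
            sharedscripts
            postrotate
                /httpd-2.2.8/bin/apachectl graceful > /dev/null 2>&1 || true
            endscript
        }

    The graceful restart in postrotate is what makes Apache reopen its log files after they have been renamed, which is the piece rotatelogs otherwise handles for you.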

    Read the article

  • Docking Station Sound Doesn't Work on Dell D830 with Windows 7

    - by cisellis
    EDIT: I can only mark one answer as the correct one, but the actual solution was a combination of two comments (updating the BIOS to A15 AND installing the Sigmatel audio drivers). I have a Dell Latitude D830 laptop that is running Windows 7 Enterprise x64. I connect to a docking station during the day with multiple monitors, a keyboard and a mouse. Everything runs with no problems, including most of the docking station ports (USB, monitors, etc.). However, the sound port on the docking station has not worked since the upgrade to Windows 7: even with the laptop plugged into the dock, the sound always comes out of the laptop, not the headphones plugged into the docking station. Here's what I've tried: I've seen other issues like this via Google that seem to be mostly unanswered. I found one or two that referenced using the Vista x64 drivers, especially the Nvidia drivers; I do not have an Nvidia chipset, but I've reinstalled the sound drivers and that has not helped. I don't have a support contract, and considering how high the cost of calling Dell usually is, that's not an option. Dell's forums are pretty much a wasteland and I've found no help there. Since this is a docking station I thought I might need to try the SATA or Intel chipset drivers from the Dell site instead; however, I'm not really sure, and I need to work on this laptop in the meantime. I can't really afford the downtime to experiment with random drivers all day in case they turn out to be incompatible (Dell still hasn't added Windows 7 to their support site as far as I can tell). Does anyone have any other ideas? Has anyone had this issue and solved it? If so, how? Thanks in advance for your help.

    Read the article

  • Why Would one VLAN have no Communication on one Switch?

    - by Webs
    So the problem is that we have a device or host that needs to communicate on a specific VLAN. This VLAN is not new; it is running all throughout our environment and works fine. But the VLAN was recently configured on the switch in question, a Cisco 3750. The DHCP server is handing out addresses on that VLAN with no problem. I have verified the cable between the host and the switch and tried multiple hosts, but none of them can communicate or get an address. I plugged my laptop into an empty port which had a different VLAN assigned and immediately got a DHCP address. When I changed that port to the same VLAN I'm having issues with, I got the same problem: the laptop just sits there and tries to DHCP an address but nothing happens. I double-checked the cores and their Layer 3 VLAN config and it's fine too. Plus, I figured the issue couldn't be with them because the VLAN works fine everywhere else it exists. So the only other thing I can think of is the switch, but the VLAN exists on the switch and seems to be configured correctly, and the trunks appear to be configured just fine as well. Anyone have any ideas? I'm lost on this one.
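
    A few IOS checks on the 3750 that usually narrow this down (a diagnostic sketch; VLAN 20 and the interface name are illustrative):

        show vlan brief                    ! is the VLAN active and is the access port listed under it?
        show interfaces trunk              ! is the VLAN allowed AND forwarding on the uplink trunk?
        show spanning-tree vlan 20         ! is STP blocking the VLAN on the uplink?
        show mac address-table vlan 20     ! are any MAC addresses being learned on that VLAN at all?
        show run interface Gi1/0/10        ! confirm: switchport mode access / switchport access vlan 20

    A VLAN that exists but carries no traffic on one switch is very often simply missing from the allowed list on the trunk back to the core.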

    Read the article

  • Microsoft Word "Random" Crashes

    - by Bent Rasmussen
    Word seemingly randomly crashes in an application setting where it is first used to programmatically databind (bookmarks) and then directly afterwards opened on the user's machine for further user input. The error message is quite precise, but the workaround has eluded me. Word crashes a moment or two after it has been opened on the user's machine, with the exception details below.

        Problem signature:
        Problem Event Name:      APPCRASH
        Application Name:        WINWORD.EXE
        Application Version:     14.0.6129.5000
        Application Timestamp:   5082f354
        Fault Module Name:       wwlib.dll
        Fault Module Version:    14.0.6129.5000
        Fault Module Timestamp:  5082f3dc
        Exception Code:          c0000005
        Exception Offset:        000eed32
        OS Version:              6.1.7601.2.1.0.16.7
        Locale ID:               1030

        Additional information about the problem:
        LCID:    1030
        skulcid: 1030

    Sometimes one can run the exact same scenario 50 times before experiencing a crash, other times only a few times. We have tried using different versions of the Word format, as well as renaming the databound file after saving so that the file being opened on the user's machine is different. In principle Word should never crash, but perhaps there is some workaround that can make it stop crashing. Googling for a solution, there appear to be multiple things that can trigger this bug.

    Read the article

  • How to access XAMPP virtual hosts from iPad on local network?

    - by martin's
    Using XAMPP on one machine. Multiple virtual hosts defined, one per project, each named in the form projectname.local. For example:

        apple.local
        microsoft.local
        client-site.local
        our-own-internal-site.local

    All works perfectly from that one machine. I now want other systems within the network to be able to access the various sites. The main reason for wanting this is to be able to test site functionality and layout from mobile devices without having to upload partial work to public servers. I can access the main XAMPP default site by simply entering the IP address of the XAMPP machine in, say, Safari on an iPad. However, there is no way to reach the projectname.local sites that I can see. Would this entail setting up a DNS server within the network? We have a mixture of Windows and Mac machines, no Linux, and the XAMPP machine is Vista 64. I don't want real external internet access to be affected in any way, just ".local" pointed at the XAMPP machine, if that makes sense.
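
    Yes, the usual answer is a small internal DNS service with a wildcard record pointing at the XAMPP box, since the iPad has no editable hosts file. As a sketch only (assuming dnsmasq can be run somewhere on the LAN, for example on a Mac via Homebrew or on a router that supports it, and that the XAMPP machine is at 192.168.1.50):

        # dnsmasq.conf
        address=/local/192.168.1.50     # answer every *.local query with the XAMPP machine's IP
        server=8.8.8.8                  # forward everything else to a normal upstream resolver

    The iPad's Wi-Fi settings would then point its DNS at the machine running dnsmasq. One caveat: ".local" collides with Apple's Bonjour/mDNS on iOS and Macs, so a suffix such as ".test" tends to cause fewer surprises.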

    Read the article

  • What user information is exposed via a browser?

    - by ipso
    Is there a function or website that can collect and display ALL of the user information that can be obtained via a browser? Background: This of course does not account for the significant cross-referencing abilities of large corporations to collate multiple sources and signals from users across various properties, but it's a first step. Ghostery is a great idea: it shows people all of the surreptitious scripts that run on any given website. But what information is available (what is the total set of values stored) that those scripts can collect from? If you log in to a search engine and stay logged in but leave their tab, is that company still collecting your webpage viewing and activity from other tabs? Can past or future inputs to pages be captured, say comments on another website? What types of activities are stored as variables in the browser app that can be collected? This is surely a highly complex question, given the countless user scenarios, but my whole point is to be able to cut through all that and just show the total set of data available at any given point in time. Then you can A/B test and see what is available within a fresh session with one tab open vs. the same webpage but with 12 tabs open and a full day of history to boot. (Latest Firefox & Chrome, on Win7, Win8 or Mint13, although I'd like to think that won't make too much of a difference. Make assumptions. Simple is better.)

    Read the article

  • Users getting 'flooded' with not read notifications (NRNs) for old emails and meeting requests

    - by Exile
    I'm being placed under quite a lot of pressure from senior management over a relatively trivial issue. Basically, the vast majority of users are complaining that they receive not read notifications (NRNs) for old emails and meeting requests in large numbers, multiple times a day. I know something strange is happening because some are delivered at silly times in the morning (i.e. 3AM or 4AM). The problem I have is that some of these NRNs relate to meeting requests and messages that are 120 days old, and some users have deleted the original message, so I don't actually know whether a given NRN is for an email or a meeting request. This is typical of what users receive as an NRN:

        From: Sender
        Sent: 23 March 2012 04:16
        To: Recipient
        Subject: Not read: Accepted: Status update

        Your message
          To: Sender
          Subject: Accepted: Status update
          Sent: Wednesday, November 23, 2011 8:59:00 AM (UTC) Dublin, Edinburgh, Lisbon, London
        was deleted without being read on Friday, March 23, 2012 4:15:32 AM (UTC) Dublin, Edinburgh, Lisbon, London.

        ...

        From: Sender
        Sent: 18 March 2012 01:13
        To: Recipient
        Subject: Not read: Gold delivery - Sourcing module

        Your message
          To: Sender
          Subject: Gold delivery - Sourcing module
          Sent: Friday, November 18, 2011 9:37:58 AM (UTC) Dublin, Edinburgh, Lisbon, London
        was deleted without being read on Sunday, March 18, 2012 1:12:37 AM (UTC) Dublin, Edinburgh, Lisbon, London.

    I have done a search and found the following:

        http://support.microsoft.com/kb/2544246
        http://support.microsoft.com/kb/2471964

    But we already installed 'Update Rollup 6 for Exchange Server 2010 Service Pack 1' back in December, so I am not sure what we can do to fix this?
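
    As one way to gather evidence before escalating further (a diagnostic sketch in the Exchange Management Shell; the time window and subject filter are illustrative):

        # confirm the build/rollup level each server is actually running
        Get-ExchangeServer | Format-Table Name, ServerRole, AdminDisplayVersion

        # pull a day's worth of NRN traffic to see which mailboxes and servers are generating it
        Get-MessageTrackingLog -Start (Get-Date).AddDays(-1) -MessageSubject "Not read:" -ResultSize Unlimited |
            Select-Object Timestamp, ServerHostname, Sender, {$_.Recipients}, MessageSubject

    Seeing whether the NRNs all originate from a handful of mailboxes or mobile-sync clients is usually the quickest way to decide whether the KB articles above actually apply.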

    Read the article

  • Many different BSOD

    - by Exa
    I'm experiencing multiple bluescreens and have been for a couple of months now. Their error codes are as diverse as their times of occurrence: sometimes it happens during gaming, sometimes when watching videos, sometimes when the computer is idle. These are the bluescreens I see most often:

        PAGE_FAULT_IN_NONPAGED_AREA
        KMODE_EXCEPTION_NOT_HANDLED
        IRQL_NOT_LESS_OR_EQUAL
        SYSTEM_SERVICE_EXCEPTION
        SYSTEM_THREAD_EXCEPTION
        INTERRUPT_EXCEPTION_NOT_HANDLED
        DRIVER_IRQL_NOT_LESS_OR_EQUAL
        DRIVER_OVERRAN_STACK_BUFFER

    Responsible drivers (according to the memory dumps):

        hal.dll
        tcpip.sys
        dxgmms1.sys
        ndis.sys
        mouhid.sys
        atikmdag.sys
        dump_atapi.sys
        and of course: ntoskrnl.exe

    My first thought was a driver incompatibility, because I am using Windows 8 and some of the bluescreens seem to come from driver issues. All drivers are up to date. I'm afraid that my memory is broken, or the mainboard, or both. I used the Windows integrated memory test, which didn't find any errors; Memtest86 found some errors. Does it make sense to buy new memory? Couldn't it be a problem with the board as well? I also read that my memory could be running at too low a voltage, but it's set to 1.5V as recommended. Another guess would be to set the memory's latencies manually, but how do I know which ones to try? Here is a screenshot of BlueScreenView showing the latest bluescreens. Maybe someone has faced the same behavior before and found a solution. Any ideas or suggestions? Current setup on which the bluescreens occur:

        Windows 8 RTM (6.2.9200)
        Asrock 970 Extreme4
        AMD FX-8150
        ATI Radeon HD5850
        16 GB RAM (DDR3-1800)
        Latest drivers for all devices
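
    Two commonly suggested steps for pinning down whether a driver or the RAM is at fault (a sketch; the dump path and debugger install location are assumptions, not confirmed on this machine):

        rem force Windows to stress all drivers so a faulty one crashes by name (verifier /reset turns it off again)
        verifier /standard /all

        rem open the latest minidump with the Debugging Tools for Windows and let the analyzer name the culprit
        "C:\Program Files (x86)\Windows Kits\8.0\Debuggers\x64\kd.exe" -z C:\Windows\Minidump\latest.dmp -c "!analyze -v; q"

    With Memtest86 already reporting errors, though, testing the sticks one at a time (and in different slots) is the cheaper experiment: if the errors follow a stick it is the RAM, if they follow a slot it is the board.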

    Read the article

  • Nginx Ubuntu Postfix Config - Can't connect to incoming IMAP server 'server not responding' but can send mail via outgoing using same details?

    - by daveaspinall
    I'm pretty new to server admin, and especially to nginx, but I seem to be getting on fine apart from accessing my mail on my iPhone. (I've changed my real domain to 'domain.com' below.) The thing is, I can send mail via my outgoing server but can't connect to the incoming IMAP one; I just get the message "the mail server at mail.domain.com is not responding".

        /etc/postfix/main.cf

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        home_mailbox = Maildir/
        inet_interfaces = all
        inet_protocols = all
        mailbox_command =
        mailbox_size_limit = 0
        mydestination = domain.com, mail.domain.com, localhost.com, , localhost, localhost.localdomain
        mydomain = domain.com
        myhostname = mail.domain.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        recipient_delimiter = +
        relayhost =
        smtp_tls_note_starttls_offer = yes
        smtp_tls_security_level = may
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain =
        smtpd_sasl_security_options = noanonymous
        smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
        smtpd_tls_auth_only = no
        smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
        smtpd_tls_key_file = /etc/ssl/private/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_security_level = may
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom

        telnet localhost 25
        ehlo locahost
        250-mail.domain.com
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH LOGIN PLAIN
        250-AUTH=LOGIN PLAIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN

    Using the following details to connect: username, password, hostname mail.domain.com, port 25.

        iptables --list
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    I also sent mail to the server as a test and got this message, if it helps:

        Technical details of temporary failure: [mail.domain.com. (10): Connection refused]

    I also looked in /var/log/mail.log and it has multiple entries of:

        postfix/smtpd[12239]: connect from 5acefc9a.bb.sky.com[90.206.252.xxx]
        Mar 23 06:47:09 new-domain postfix/smtpd[12239]: lost connection after CONNECT from 5acefc9a.bb.sky.com[90.206.252.154]

    Notice "new-domain", which is incorrect, while the server hostname and the hostname in the configs are correct? I recently moved servers and the host has set the primary domain on the service as new-domain.com, so this may be the issue. Like I said, connecting to the outgoing server works, but the incoming one gets the "not responding" error. Any ideas would be much appreciated!
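
    Worth noting: the telnet test above only proves SMTP on port 25 is up. Incoming mail on an iPhone needs an IMAP (or POP3) daemon such as Dovecot listening on 143/993, which Postfix itself does not provide. A quick check along those lines (a sketch, assuming Dovecot is the intended IMAP server):

        sudo netstat -tlnp | grep -E ':(143|993) '   # is anything listening on the IMAP ports at all?
        telnet mail.domain.com 143                   # can the port be reached from outside?
        sudo apt-get install dovecot-imapd           # if nothing is listening, no IMAP daemon is installed yet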

    Read the article

  • Suspending/Screen Going Off When Still In Use (Ubuntu & Arch)

    - by luke
    I have a laptop (HP Pavilion G6) that was running Ubuntu and for a while now (at least 6 months) has had a problem with randomly suspending whilst still in use, with a full battery and still being charged. Since the problem started under Ubuntu, I first attempted to disable suspend in every way I could find (GUI settings plus the dconf editor); this didn't work and it kept suspending, so I ended up switching to Arch Linux. Unfortunately, not long after switching to Arch I experienced the same problems. So I modified the settings in /etc/systemd/logind.conf to prevent it from suspending, and this time it worked, kind of. Now the screen goes off and I have to change to a different tty (using ctrl-alt-Fx, which was also something I sometimes had to do when waking from suspend in Ubuntu) to get the screen to come back on. The strange thing is this only happens when running Linux distros, and only occasionally (it may happen once or twice a week at most), but when it does happen it can happen multiple times in a row. And it only seems to happen when I am using the machine. That may just mean it hasn't yet happened while I wasn't using it, but generally if I leave it to run something or play a video it doesn't occur; it only occurs when I am actively using it, regardless of which program I am in (it has occurred in Firefox, vim, even in a VirtualBox VM). At first I thought it could be the CPU temperature, but after monitoring it I discovered it occurred a lot of the time when the CPU was below 50 °C. I then checked /var/log/* but could not see anything related to the suspends, only a few standard entries from when it woke up. I am really out of ideas and hoping someone can help. Thanks in advance.
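
    For reference, these are the kinds of logind.conf overrides usually involved, plus a way to see who requested a suspend after the fact (a sketch; exact values are illustrative):

        # /etc/systemd/logind.conf  (restart systemd-logind or reboot after editing)
        HandleSuspendKey=ignore
        HandleLidSwitch=ignore
        IdleAction=ignore

        # after the next incident, check which component asked for the suspend or screen-off:
        journalctl -b -u systemd-logind

    If the journal shows nothing at the time of the incident, that points away from logind and towards the graphics driver or hardware (a flaky lid switch is a classic cause of exactly this pattern).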

    Read the article

  • Apache Named Virtual Hosts and HTTPS

    - by Freddie Witherden
    I have an SSL certificate which is valid for multiple (sub-)domains. In Apache I have configured this as follows. In /etc/apache2/apache2.conf:

        NameVirtualHost <my ip>:443

    Then for one named virtual host I have:

        <VirtualHost <my ip>:443>
            ServerName ...
            SSLEngine on
            SSLCertificateFile ...
            SSLCertificateKeyFile ...
            SSLCertificateChainFile ...
            SSLCACertificateFile ...
        </VirtualHost>

    Finally, for every other site I want to be accessible over HTTPS I just have a:

        <VirtualHost <my ip>:443>
            ServerName ...
        </VirtualHost>

    The good news is that it works. However, when I start Apache I get warning messages:

        [warn] Init: SSL server IP/port conflict: Domain A:443 (...) vs. Domain B:443 (...)
        [warn] Init: SSL server IP/port conflict: Domain C:443 (...) vs. Domain B:443 (...)
        [warn] Init: You should not use name-based virtual hosts in conjunction with SSL!!

    So, my question is: how should I be configuring this? Clearly from the warning messages I am doing something wrong (although it does work!); however, the above configuration was the only one I could get to work. It is somewhat annoying, as the configuration files have an explicit dependence on my IP address.
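
    One arrangement that usually quiets the IP/port conflict warnings is to repeat the SSL directives, pointing at the same multi-domain certificate, in every :443 vhost rather than letting the extra vhosts run without them, as in this sketch (paths are illustrative):

        <VirtualHost <my ip>:443>
            ServerName              anothersite.example.com
            SSLEngine               on
            SSLCertificateFile      /etc/ssl/certs/multi-domain.crt
            SSLCertificateKeyFile   /etc/ssl/private/multi-domain.key
            SSLCertificateChainFile /etc/ssl/certs/multi-domain-chain.pem
        </VirtualHost>

    The generic "name-based virtual hosts in conjunction with SSL" warning may still appear on older builds without SNI support, but it is harmless as long as every name is covered by the shared certificate.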

    Read the article

  • Web based KVM management for Ubuntu

    - by Tim
    We've got a single Ubuntu 9.10 root server on which we want to run multiple KVM virtual machines. To administer these virtual machines I'd like a web-based KVM management tool, but I don't know which one to choose from the list of tools mentioned on linux-kvm.org. I've used virsh & virt-manager on my desktop, but would like a web interface for the server. I tested ConVirt on my desktop, but it failed to pick up KVM machines from virsh / virt-manager, and I could not get KVM virtual machine import to work (only Xen). oVirt looks good, but I can't find out if and how I can install it on Ubuntu 9.10. (And I'd really rather not waste another few days on testing stuff that might not work in the end.) Can anyone recommend any good web-based KVM management tools that are easy to install on Ubuntu 9.10? I'm looking for something that will also allow me to run other services like Apache and PostgreSQL besides hosting virtual machines, so preferably fairly lightweight and with no dedicated OS install. We don't need any professional clustering / migration or anything, just something that will let us create, start, inspect, administer & stop virtual machines from a web page. Best regards, Tim. Update: Anyone have any suggestions? It's awfully quiet here...

    Read the article

  • What are good systems for managing PHP/MySQL infrastructure?

    - by sbrattla
    I work in a company which is about to migrate most applications from in-house custom-built Java/Tomcat applications to Drupal. Due to company policies, applications and websites need to run on in-house servers, which means we need infrastructure for Drupal (PHP/MySQL) applications. This must have been solved a million times already; I believe it is what web-hosting companies do every day. Even though we work on a much smaller scale than web-hosting companies, I assume it would make sense to look at the task as if we were going to run an internal small-scale web-hosting company. This means the guys in IT operations could be "responsible" for "offering" web hosting, while developers could use these "services". We have three environments: dev(elopment), test and prod(uction). It would make sense for developers to be able to log in to a system and create/edit/delete dev and test sites as they like. Production sites should be available through the same system, but only to IT ops. We need to work with clusters of web servers, meaning that an administration system should be capable of creating/editing/deleting sites across multiple servers. I know there's no "this is it" answer to my question, but what would be a good place to start? Apart from the actual hardware, what would be a good administration system for this?

    Read the article

  • Interaction between two Clouds

    - by Snehal Masne
    I have set up Cloud-A with 1 x [CLC+CC] and 2 x [NC] computers, and I have another Cloud-B with the same configuration [using the Ubuntu Enterprise Cloud]. Both of them are working fine individually, on the same LAN. Now, if I want to add the NC of Cloud-A to the CC of Cloud-B (in case the resources of Cloud-B are exhausted), how can I make that possible? I guess this calls for the interoperability stuff... Could you please explain what exactly happens when we ask for an instance: does the direct interaction happen between the client and the NC, or does it go through the CLC and CC? What I want to say is this: say there are multiple cloud providers, and a user is subscribed to one of them, say Cloud-A, for IaaS. As the requirements are dynamic, all the resources of Cloud-A may get exhausted. There may be another Cloud-B which can provide the services, but Cloud-A can't ask the client to go to Cloud-B. So is it possible to have some coordination between these two providers to share resources mutually, keeping the client fully unaware of what's going on in the background? Please reply. I am sorry if I'm making a mistake anywhere... Thanks in advance :) Regards, www.TechProceed.com

    Read the article

  • VPN service into 192 network

    - by tophersmith116
    I'm thinking about setting up a security testing lab. I work on a switched network, and that just makes for unnecessary headaches when doing testing. I'd like to create a 192 network (a private 192.168.x.x subnet) with a few machines inside for DBs, app servers etc. I will need a pivot machine that connects to both the outer network and the 192 network (for automation purposes). But I'd like to be able to connect into the 192 network with my own machine from the outer network as the "attacking" machine (rather than have dedicated attack machines inside the 192 network). Therefore, I'd like the pivot server to be a VPN server as well, so that my machine can VPN into the 192 network from the outer network. First off, is this even possible? Can I have a single computer with two NICs where a VPN service allows remote connections into the 192 network? Secondly, I'd like to have multiple outer clients connect to the VPN. Does anyone have any suggestions? I've used Hamachi successfully before, but I've also seen some good stuff from OpenVPN.
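
    Yes, a dual-NIC pivot doing exactly this is a common setup. A minimal OpenVPN server sketch for the pivot (all addresses and file names are illustrative, and the usual certificate/key generation is assumed to have been done separately):

        # /etc/openvpn/server.conf on the pivot box
        port 1194
        proto udp
        dev tun
        server 10.8.0.0 255.255.255.0             # address pool handed to the "attacking" clients
        push "route 192.168.50.0 255.255.255.0"   # give clients a route into the lab subnet
        ca ca.crt
        cert server.crt
        key server.key
        dh dh.pem
        keepalive 10 120

        # plus IP forwarding on the pivot so traffic can cross between the two NICs:
        #   echo 1 > /proc/sys/net/ipv4/ip_forward

    Multiple outer clients just need their own certificates, and the machines inside the lab either use the pivot as their gateway or get a static route back to 10.8.0.0/24.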

    Read the article

  • Outdoor WiFi Mesh Topology vs. Repeaters

    - by IronJaxor
    Here's the current configuration in our organization (which I believe is incorrect): we have a number of Cisco 1500 series APs (22 in total) that are mounted outdoors to provide seamless WiFi coverage over a large area. Each AP, however, has its own physical Ethernet connection back to the WLC (all the APs are marked as Root APs), and they are all broadcasting the same SSID. We have tried to stagger the channel selection, but because there are only three non-overlapping channels to choose from, and in some areas the density of APs is quite high, there are multiple places with channel interference. With this configuration we experience 100-150 client disconnects every day. (Our clients are mobile, so they move throughout the coverage area constantly.) My idea is to switch the APs to the same channel, thereby forming a wireless mesh, use the built-in functionality of the 1500 series to use 802.11a as the backhaul, and designate one or two APs as Root APs wired back to the WLC. This would form a WiFi mesh, which, if I'm not mistaken, is the point of the 1500 series in the first place! I am, however, completely new to WiFi networks and wondering whether I am simply mistaken about what my proposed changes will achieve, or whether there is a better way to tackle the WiFi topology.

    Read the article

  • File system that allows specifying a different RAID level per directory and changing it afterward

    - by Adam Ryczkowski
    I have 5 hard drives on which I want to keep my data. Some of my files are more important and some are less so, so some of them I wish to put on RAID-6, while for others RAID-5 is sufficient. It is difficult to predict, at the moment the arrays are created, how much space of each type to declare. What I would do if I hadn't heard about ZFS is partition the hard drives into identical 100GB partitions and, as my needs grow, assemble those partitions into md devices using Linux software RAID. Then I'd combine those devices using LVM into logical volumes where I'd put my data. So when I needed more RAID-6 space, for example, I'd take a 100GB partition from each hard drive, assemble them into another RAID-6 md device, and use it as physical storage for the volume group dedicated to RAID-6 data; then I could grow the file system on that logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by LVM) would reside completely independent file systems, which I'd later merge with multiple mount --bind calls into a single directory structure that reflects the logical structure of the data rather than that of the storage. But now that I have heard about ZFS, with all its performance, data-healing and compression capabilities, I cannot stop wondering whether it can help me. If so, what do you think would be the best setup?
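
    A sketch of the md + LVM plan described above, just to make the moving parts concrete (device names, sizes and the partition number are illustrative):

        # build one RAID-6 set from a 100GB partition on each of the five disks
        mdadm --create /dev/md10 --level=6 --raid-devices=5 /dev/sd[abcde]7
        pvcreate /dev/md10
        vgcreate vg_raid6 /dev/md10
        lvcreate -L 250G -n lv_important vg_raid6
        mkfs.ext4 /dev/vg_raid6/lv_important

        # later, when more RAID-6 space is needed: build /dev/md11 from five more partitions, then
        #   vgextend vg_raid6 /dev/md11
        #   lvextend -r -L +200G /dev/vg_raid6/lv_important   # -r also grows the file system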

    Read the article

  • Azure Virtual Machines - what fault tolerance do they provide?

    - by Borek
    We are thinking about moving our virtual machines (Hyper-V VHDs) to Windows Azure but I haven't found much about what kind of fault tolerance that infrastructure provides. When I run VHD in Azure, I've got two questions: Is my VHD and all the data in it safe? I think that uploaded VHDs use the "Storage" infrastructure so they should be automatically replicated to multiple disks and geographically distributed but should I still make a full-image backup just to be safe? (Note that of course I will be backing up the actual data inside VMs that I care about; I just want to know if there is a chance greater than 0.0000001% that one day I will receive an email from Microsoft telling me that my VM is gone and that I should create or restore it from scratch). Do I need to worry about other things regarding the availability of my VMs? I mean, when I have an on-premise server I need to worry about the hardware itself, about the host operating system, what would happen if my router failed, if my Hyper-V's C: drive failed etc. Am I right in thinking that with Azure, their infrastructure takes care of all of this? Thanks.

    Read the article

  • Recover badly recorded DVDs

    - by CesarGon
    A few years ago (2003-2005) I bought a Sony USB external DVD recorder for my Dell laptop and I used it to burn a lot of discs. Much later, when I tried to use one of these discs, I realised that I could not read it. The disc behaved as if it was scratched or dirty. I tried on a couple of different DVD drives but got the same effect. Sadly, all the discs that I burnt with that recorder suffer from the same problem. Edit. When I read one of these discs with ImgBurn, I get lots of unrecovered read errors in multiple sectors, even at 1x speed. The sectors that cause read errors seem to be quite random; it's not always the same one. I have no idea what could be wrong with the discs. I doubt that they are scratched or dirty; it would be too much of a coincidence that all the discs that I burnt with that recorder got damaged at the same time. Also, they don't show any physical defects. Is there any way to diagnose what the problem is and, hopefully, recover the contents of the discs? Many thanks.
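
    If the goal is mainly to salvage whatever still reads, one common approach is to image the disc with GNU ddrescue and work on the image rather than the disc (a sketch; it assumes a Linux environment, or a ddrescue build for another OS, and that the drive shows up as /dev/sr0):

        # first pass: copy everything that reads cleanly, skipping bad areas quickly
        ddrescue -b 2048 -n /dev/sr0 disc.iso disc.map
        # second pass: go back and retry the bad sectors a few times
        ddrescue -b 2048 -r 3 /dev/sr0 disc.iso disc.map
        # then mount the image read-only and copy out whatever was recovered
        sudo mount -o loop,ro disc.iso /mnt

    Repeating the same two passes in a different drive, against the same disc.iso and disc.map, lets ddrescue fill in sectors that one drive can read but another cannot, which often helps with marginal burns like these.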

    Read the article

  • How can I make the Firefox Password Manager more intelligent?

    - by Philip
    I have two major gripes about the FF password manager: If I restore a session with multiple tabs with sites with saved passwords, the master password prompt pops up once for each of them, even if I correctly enter the password the first time. Sometimes I want Firefox not to use my saved passwords at all (e.g. because I want to let someone else use it without getting access to my accounts), but hitting cancel results in erratic behavior--sometimes the box just pops up again and again, or sometimes it stops and behaves as I wish (continuing to browse w/o my passwords) until it encounters another site that wants my password. Thus even when hitting cancel does leave me free to browse passwordless, it doesn't get Firefox to leave me alone for the whole session. Thus: do you know of any tweak or add-on that could (1) make Firefox smart enough to get my master password once and then leave me alone, and/or (2) add an option (checkbox-style, toggle button, etc.) to browse "for now" (until I toggle the option) or even "for this session" (until I restart) without using any of my saved passwords? I'm running Firefox 3.5.6 on Mac OS X 10.5; thanks.

    Read the article
