Search Results

Search found 18475 results on 739 pages for 'non exist'.

Page 559/739 | < Previous Page | 555 556 557 558 559 560 561 562 563 564 565 566  | Next Page >

  • What are possible results/side effects if replication between DCs in a Windows domain is unable to occur?

    - by hydroparadise
    There's plenty of administration literature out there on how to properly manage Windows servers. But in dealing with real life, things don't always occur the way you want them to. In Microsoft's Windows Server 2003 Administrator's Companion, out of 1400+ pages, there's only one page I could find on setting up additional domain controllers. They make it sound seamless and don't reveal much about what happens if "peer" DCs are unable to replicate. Down to the specific issue at hand: we had a DC go down about a month ago due to a bad RAID controller. There was nothing critical that warranted immediate attention, so bringing it back up got put on the back burner. A month later, we got the DC back up and running, and everything seemed OK. The next day, nobody was able to log on, with complaints that the "user does not exist" or that the machine was "unable to establish a trust relationship". Knowing that I had just put the downed DC back on the network, I immediately took it back off the network and had everybody restart their workstations. After that, Exchange was fine, shares became available, and everybody was able to log in. After doing some event log swimming, it would appear that everything started due to replication issues on the SYSVOL. I've read that you can force replication, but that would mean putting the DC back on the network, and I am afraid that something else could go wrong. So, what other issues could one expect to run into where two DCs have been unable to replicate for over a month?
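
    For reference, a sketch of the commands typically used to check such a DC before reintroducing it, assuming it can first be reconnected on an isolated segment; the server name DC2 is hypothetical:

        REM Summarize replication health across all DCs
        repadmin /replsummary

        REM Show inbound replication partners and last-attempt errors for the suspect DC
        repadmin /showrepl DC2

        REM Run the replication-specific diagnostics
        dcdiag /test:replications /s:DC2

        REM If the logs look sane, force a full synchronization of all partitions
        repadmin /syncall /AdeP

    One caveat worth noting: a DC that has been offline longer than the forest's tombstone lifetime (60 or 180 days by default) should generally be demoted and re-promoted rather than allowed to replicate again; a month is within that window.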

    Read the article

  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in using nginx or varnish, but this is my setup at the moment. I have a node.js server running which serves either JSON, HTML templates, or socket.io events. In front of node I have nginx, which serves all static content (CSS, JS, etc.). At this point I would like to cache both static content and dynamic content in memory. It's my understanding that varnish can cache static content quite well, and it wouldn't require touching my application code. I also think it's capable of caching dynamic content too, but there cannot be any cookie headers? I do use redis at the moment for holding session data, and I planned to use it for other things in the future, like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options, but there might be more:
    1. Throw varnish in front of nginx and let varnish cache static pages; no app code changes. Redis would cache dynamic db calls, which would require modifying my app code.
    2. Ignore varnish completely and let redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files).
    I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis, and I'm too inexperienced to bench it myself (there's a high chance my configs would be awful). I'm basically looking for the solution that would be the most efficient in terms of req/sec and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).
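
    For what it's worth, a third option beyond the two above is nginx's own proxy cache, which can hold short-TTL dynamic responses without adding varnish at all; a minimal sketch, where the upstream port, cache path, and TTLs are assumptions:

        # proxy_cache_path must sit at http{} level; paths and sizes are examples
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=256m inactive=10m;

        server {
            listen 80;

            location / {
                proxy_pass http://127.0.0.1:3000;   # node.js upstream (assumed port)
                proxy_cache app_cache;
                proxy_cache_valid 200 5s;           # short TTL for dynamic responses
                proxy_no_cache $http_cookie;        # skip caching when cookies are present
                proxy_cache_bypass $http_cookie;
            }
        }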

    Read the article

  • Fixing Windows install by connecting its hard drive via USB to a different laptop

    - by Jason
    I tried to upgrade a laptop to SP3, which broke it. I later found out SP3 doesn't work on that 2002 laptop. I can't uninstall SP3, or fix SP2, because the hard drive is now not detected during setup (I've read that's the problem you get). I put the hard drive in a USB drive case and plugged it into my other laptop, and I can read (and write to) the disk okay. (The hard drive won't fit in my other laptop, so I'm using USB.) I need to get that disk back to SP2, or fix whatever files got screwed up that are causing the disk not to be recognized. I don't want to do a re-install, as there are 80GB of files on it I need, which won't fit on the HD of my other laptop, and also because I no longer have some of the install CDs for software on it. What do I need to do to fix that drive from my other laptop? (I don't want my working laptop (XP SP3) to get messed with by putting an SP2 disk in the CD drive, or the non-OS data on the other hard drive touched.)

    Read the article

  • How does KMS (Windows Server 2008 R2) differentiate clients?

    - by Joe Taylor
    I have recently installed a KMS server in our domain and deployed 75 new Windows 7 machines using an image I made with Acronis True Image. There are 2 variations of this image rolled out currently. When I go to activate the machines, it returns that the KMS count is not sufficient. On the server, slmgr /dlv shows:
        Key Management Service is enabled on this machine.
        Current count: 2
        Listening on Port: 1688
        DNS publishing enabled
        KMS Priority: Normal
        KMS cumulative requests received from clients: 366
        Failed requests received: 2
        Requests with License status Unlicensed: 0
        Requests with License status Licensed: 0
        Requests with License status Initial grace period: 1
        Requests with License status License expired or hardware out of tolerance: 0
        Requests with License status Non-genuine grace period: 0
        Requests with License status Notification: 363
    Is it to do with the fact that I've used the same image for all the PCs? If so, how do I get round this? Would changing the SID help? OK, knowing I've been thick, what's the best way to rectify the situation? Can I sysprep the machines to OOBE on each individual machine? Or would NewSID work?
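
    For reference, duplicate KMS client machine IDs (CMIDs) from an image that was not generalized with sysprep are the usual cause of a stuck count; a sketch of the commonly suggested fix, run from an elevated prompt on each imaged client:

        REM Optional: compare the Client Machine ID shown by slmgr /dlv on two imaged PCs first
        REM Reset the licensing state (including the CMID), then reboot
        slmgr /rearm

        REM After the reboot, attempt activation and verify on the KMS host that the count rises
        slmgr /ato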

    Read the article

  • IIS7 ASP.NET 4.0 - 404 errors only for external clients

    - by dmcgiv
    Recently we moved an ASP.NET 3.5 website to 4.0 (integrated mode), and when we deployed to the client's server (Windows Server 2008 Web edition) we noticed that some .aspx pages are serving 404 errors. What is strange is that: 1) the pages exist; 2) if you browse from the server itself, the page is served as normal, and only external clients get the 404; 3) it's the default 404 error page, not the one configured in the web.config; 4) it only happens for some .aspx pages, and I've not been able to establish a link between the pages that are not being served externally. We are using a URL rewriter module, which I first thought might be at fault, but then I realised that only some of the failing pages are being rewritten. I've also tested removing the HTTP module, and the problem still persists. As everything works as expected when logged onto the server, I was thinking it may be some sort of permission issue, but why would it only affect a few pages? I turned on failed request tracing, and the debug files are being generated with the expected 404 error, although at the moment I'm not sure what most of the data means, so I can't decipher what's going on internally. I'd really appreciate some help with this one.

    Read the article

  • Solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server. My problem is that we have a very poor internet connection (almost negligible) at work, and the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always get their packages from online repositories somewhere. I know I could probably download the source and then compile it myself, but that's going to be too much of a hassle for me. I'm thinking about two solutions:
    Plan A: Run a server VM on my Mac and then use this VM as the package source for the offline server. I've read about Ubuntu's apt-proxy and it seems to be good enough, though I haven't tried it yet. I'm not sure if this is possible, but can I simply do apt-get install nginx --download-only so that the package and its dependencies will be downloaded into my VM, and my server can use the VM as the source repo for apt-get?
    Plan B: Run a server VM on my Mac (which I can set up/update easily when I'm home) and then clone the VM to the offline development server. Maybe I should simply make the server a VM host so I can just copy the VM over. I think this is okay for the first-time setup, but subsequent updates will take too long (cloning the VM image).
    If I were working on Windows, I imagine it'd be easier, because most services have an installer file that I can download and then run at the server. If you could suggest another way, it would be much appreciated.
    Update: From Michael Hampton's answer, I found a possible solution: apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.
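
    For reference, a minimal sketch of the download-only approach from Plan A, assuming the VM and the offline server run the same Ubuntu release and architecture; the package name is just an example:

        # On the internet-connected VM: fetch nginx and its not-yet-installed dependencies
        # into /var/cache/apt/archives without installing them
        apt-get install --download-only nginx

        # Copy the .deb files to the offline server (USB stick, scp over the LAN, etc.)
        # and install them there
        dpkg -i /path/to/debs/*.deb

    One caveat: --download-only only fetches dependencies that the VM itself is missing, so the VM should be kept as close to a bare install as possible, or a dedicated tool such as apt-cacher used instead.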

    Read the article

  • Should I update the kernel on a Linux machine?

    - by Legate
    As I understand it, updating to a new kernel (with the normal linux-image... package, not by rolling my own) requires a server restart. However, one of our servers (Ubuntu 10.04) is running several extensive screen sessions. Restarting kills those, which is always a major hassle for their owners (mostly because of lost session histories). What should I do? I see several possibilities:
    1. Do nothing, that is, update only non-kernel packages (perhaps use apt pinning?).
    2. Update the kernel, but don't restart. (Is that smart? I seem to remember there might be some problems with loading kernel modules.)
    3. Update the kernel and restart. Is there perhaps some way to preserve the screen sessions?
    I guess it ultimately boils down to this question: How important is it to update the kernel? I posted this question here instead of askubuntu.com because I think this is not an Ubuntu-specific issue, though this server is running Ubuntu.
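
    For reference, a sketch of how kernel packages are typically held back while everything else updates, era-appropriate for Ubuntu 10.04; the exact (meta)package name on the machine is an assumption and should be checked first:

        # Mark the kernel image package as held (dpkg will refuse to upgrade it)
        echo "linux-image-server hold" | dpkg --set-selections

        # Verify the hold took effect
        dpkg --get-selections | grep linux-image

        # Then update everything else as usual
        apt-get update && apt-get upgrade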

    Read the article

  • How to search a file for matching whole lines?

    - by WilliamKF
    I have a command which sends to stdout a series of numbers, each on a new line. I need to determine whether a particular number exists in the list. The match needs to be exact, not a substring. For example, a simple way to approach this that does not work would be:
        /run/command/outputting/numbers | grep -c <numberToSearch>
    If the count is non-zero, a match was found; if it is zero, the number is not in the list. This gives a false positive when searching for '456' on the following list:
        1234567
        98765
        23
        1771
    The issue with this is that the numberToSearch can match a substring of the digits on a line, whereas I only want hits on the whole line. I looked at the man page for grep and did not see any way to only match whole lines. Is there a way to do this, or would I be better off using awk or sed or some other tool instead? I need a binary answer as to whether the number being searched for is present or not.
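
    For reference, a sketch of a whole-line match with grep: the -x flag restricts matches to entire lines, -F treats the pattern as a fixed string rather than a regex, and -q makes the exit status itself the binary answer:

        # exit status 0 if 456 appears as a whole line, 1 otherwise
        if /run/command/outputting/numbers | grep -qFx '456'; then
            echo "found"
        else
            echo "not found"
        fi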

    Read the article

  • Is it possible to add a WiFi HotSpot to an already established LAN, keep the two separate, and not modify the primary router?

    - by user12844
    I have a setup where my Cisco ASA sits in one facility, providing access to the internet for two buildings. The two buildings are geographically separated by a wireless bridge spanning about 10 miles. All computers and equipment inside the LAN are on the same subnet (it's pretty small), and we have WiFi APs in both locations providing wired and wireless access to the LAN. Given all the BYOD (iPods, smartphones, etc.) coming into the office, as well as visiting reps, we would like to also provide a non-secure, device-isolated (the devices cannot see or communicate with each other), LAN-isolated (the devices cannot see or use anything on the LAN) hotspot that anyone could use for their devices, giving them access to the internet ONLY, without needing a password. I get that this could be possible at the facility with my Cisco if I messed with it and created VLANs etc., but then I would need to get it across my bridge as well, and I don't think that would be possible without serious reconfiguration of everything. I would really like some kind of magic drop-in solution that can piggyback on my LAN without needing many, if any, changes to the current setup.

    Read the article

  • How do I serve Ruby on Rails applications on Windows Server 2008?

    - by Adam Lassek
    I have spent the last several hours attempting to get Ruby on Rails running on a Windows server, with no luck. At first I tried configuring a test application through IIS7's FastCGI support, but the documentation for this is not very good. I've been following this blog entry, and this one, and this one, and this one, but everything seems to be missing major steps or is out of date. And every article keeps linking back to this Howto from rubyonrails.org that doesn't exist. The sense that I'm getting is that even if I manage to make this work, IIS's FastCGI isn't good enough to use in a production environment anyway. So it looks like my best bet is to set up a reverse proxy in IIS that points to Apache & Mongrel/Passenger, using ARR and URL Rewrite. Is there anybody else out there stuck deploying a Rails application on a Windows stack? Am I on the right track? Can you give me a better idea of how to configure this? I believe Plesk already installed an instance of Apache/Tomcat on this server using a different port, so adding another virtual host shouldn't be difficult; the hardest part seems to be setting up the reverse proxy through IIS.
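
    For reference, a minimal sketch of the reverse-proxy rule as it is typically written for ARR + URL Rewrite in the site's web.config, assuming the Rails backend listens on port 3000; the port and rule name are assumptions:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <!-- Forward everything to the Rails backend -->
                <rule name="RailsProxy" stopProcessing="true">
                  <match url="(.*)" />
                  <action type="Rewrite" url="http://localhost:3000/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Proxy functionality must also be enabled in the ARR module's server-wide settings for the Rewrite action to forward to a remote URL.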

    Read the article

  • OpenOffice Vs Microsoft Office 2007/2010

    - by Moody Tech
    I have been asked to summarise the pros and cons of the choice between Microsoft Office and OpenOffice. I have a broad idea of what needs to be said; however, I would like to open a discussion here and have a single place to go to when the time comes to give the summary to management. There are obvious points of contention: for me, the lack of compliance with Group Policy is a major concern [default save location/visibility of C:/visibility of files and folders on the HDD]. However, I am sure that functionality and compatibility will be the prime movers. We are looking at making major savings by reducing our commitment to Microsoft licensing. So what are your experiences? What happens when there are no direct equivalents? [Word has a close match in OpenOffice, but a database solution match is not as close, and neither is there an Outlook equivalent (connecting to Exchange Server and downloading calendars, shared calendars, and scheduled events; Exchange will still exist after the move to open-source solutions).] In summary then, what do you see as the benefits of this plan, and how do you see the problems manifesting? Discuss... Many thanks.

    Read the article

  • No access to Windows 2003 admin shares

    - by ARomo
    This is the environment: several Win 2003 SP2 servers and several Win XP SP2 & SP3 clients, all in the same LAN. The firewall is disabled everywhere. No recent Windows updates or configuration changes. This is the problem: since last Thursday, when I log on to any other server or workstation as any regular (non-admin) user, I fail to be able to open ADMIN SHARES ONLY (namely \\server1\c$, \\server1\e$ and \\server1\admin$). The error message is:
        "\\server1\c$ is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again."
    I can, however, open the same shares if I use the FQDN or IP address:
        \\server1.domain.local\c$
        \\172.0.0.1\c$
    Other shares do not have this issue, and I can open them without any problem. Any ideas or suggestions would be truly appreciated. Thank you in advance.
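
    For reference, the last sentence of that error usually points at stale sessions under different credentials; a sketch of the commands typically used on an affected client to inspect and clear them:

        REM List existing connections to see which credentials are in use
        net use

        REM Drop all existing connections, then try the admin share again
        net use * /delete
        net use \\server1\c$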

    Read the article

  • LMDE detects a wireless card, but can't use it

    - by Davidos
    LMDE can see my wireless card and correctly identifies it, but it refuses to let me turn it on through the shiny graphical interface. Is there a way to use the non-shiny terminal to turn on my wireless? One small tidbit I noticed was the stated version; it says the version is 00. I believe that's hexadecimal for 0, which may indicate something screwy with the software. It would be nice if someone could tell me how to figure out how to solve this kind of problem in the future.
        *-network DISABLED
             description: Wireless interface
             product: Centrino Wireless-N 1000
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:09:00.0
             logical name: wlan0
             version: 00
             serial: 91:e3:7b:0d:a3:a9
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-1-amd64 firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
             resources: irq:43 memory:e1d00000-e1d01fff
    I've tested multiple other network managers, and none of them work. The wireless switch is on, so I'm sure it's a driver problem.
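
    For reference, a sketch of how a soft or hard radio block is usually ruled out from the terminal before blaming the driver; rfkill ships with most Debian-based distributions:

        # Show whether the radio is soft- or hard-blocked
        rfkill list

        # Clear any software block, then try bringing the interface up
        rfkill unblock all
        ip link set wlan0 up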

    Read the article

  • Preventing back connect in Cpanel servers

    - by Fernando
    We run a Cpanel server, and someone gained access to almost all accounts using the following steps: 1) Gained access to a user account due to a weak password. Note: this user didn't have shell access. 2) With this user account, he accessed Cpanel and added a cron task. The cron task was a perl script that connected to his IP, and he was able to send back shell commands. 3) Having a non-jailed shell, he was able to change the content of most websites on the server, especially for users who set their folders to 777 (unfortunately a common recommendation, and sometimes a requirement, for some PHP software). Is there a way to prevent this? We started by disabling cron in the Cpanel interface, but this is not enough; I see a lot of other ways in which a user could run this perl script. We have a firewall running and blocking uncommon outgoing ports, but he used port 80, and I can't block that port, as a lot of processes use it to access things, even Cpanel itself.
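
    For reference, one commonly suggested mitigation is egress filtering by user rather than by port, so that only trusted system accounts can open outbound connections; a minimal sketch, assuming the web server runs as nobody (additional service accounts such as the mail daemon would need their own ACCEPT rules):

        # Allow root and the web server user to make outbound TCP connections
        iptables -A OUTPUT -o eth0 -p tcp -m owner --uid-owner 0 -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp -m owner --uid-owner nobody -j ACCEPT

        # Reject outbound TCP from everyone else (cron-launched back-connect scripts included)
        iptables -A OUTPUT -o eth0 -p tcp -j REJECT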

    Read the article

  • SQL Server: Network pauses after installing cheap SATA card: Is there a solution?

    - by samsmith
    At the risk of being assigned to the "bad DBA" club... I did something desperate, and may have to undo it. Problem: after installing a low-cost eSATA board, my SQL Server is intermittently unresponsive (seemingly when there is a lot of IO to the eSATA drive). Questions: 1) Is there a solution to the intermittent unresponsiveness that allows me to keep the eSATA in place? 2) Whether or not (1==true): what is a decent, low-cost way to add 1-3 TB of storage to SQL Server for non-critical DBs? Detail: our SAN is full, and expanding it is costly and will take a month. I have a pressing need to add 1-3 TB for some development DBs (i.e. not mission critical; data loss is OK). As a bandaid, I threw a $20 eSATA PCI board into the Dell 1950 server and attached an external 2TB eSATA drive. This seemed to work fine, but I notice that our production SQL DBs, and even remote desktop, now experience network "pauses" that they never did before (with both SQL client apps and remote desktop throwing "networking problem" errors). This SQL Server has lots of memory and runs an instance of SQL 2005 (where all line-of-business apps reside) and an instance of SQL 2008 (for development DBs). SQL Server RAM has been appropriately configured, and this setup has run great for years. The server is:
        Dell 1950
        Win2003 x64
        14GB RAM
        PERC controller, 2 mirrored HDs internal
        Dell SAN over gigabit ethernet, dual homed
        2 PCIx slots (1 used by the NIC for the SAN, 1 now in use for the eSATA board)
    Thank you for suggestions!

    Read the article

  • Apache LocationMatch does not work for a regex group

    - by dma_k
    I would like to configure Apache to proxy mldonkey running at localhost. Initially I used the following configuration:
        <IfModule mod_proxy.c>
            <LocationMatch /(mldonkey|bittorrent)/>
                ProxyPass http://localhost:4080/
                ProxyPassReverse http://localhost:4080/
            </LocationMatch>
        </IfModule>
    and it didn't work! error.log reads
        [error] [client 192.168.1.1] File does not exist: /var/www/mldonkey
    which means that Apache does not intercept the URL. However, when I change the regexp to the following:
        <LocationMatch /mldonkey/>
    it starts to work (i.e. mod_proxy functions OK). I have tried the following alternatives:
        <LocationMatch ^/(mldonkey|bittorrent)/>
        <LocationMatch ^/(mldonkey|bittorrent)/.*>
        <LocationMatch ^/(mldonkey|bittorrent)>
        <LocationMatch /(mldonkey|bittorrent)>
        <LocationMatch "^/(mldonkey|bittorrent)/">
        <LocationMatch "/(mldonkey|bittorrent)">
        <LocationMatch "/(mldonkey)">
        <LocationMatch "/(mldonkey)/">
    with no positive result. I am stuck. Please give me a hint where to look.
    P.S. Apache Server 2.2.19.
    P.P.S. I would be happy if <LocationMatch> worked, without bringing in the heavy artillery of mod_rewrite.
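
    For reference, a sketch of the regex-aware alternative usually suggested for this on 2.2: ProxyPass inside <LocationMatch> has no way to map the matched path, whereas ProxyPassMatch takes the pattern itself and can use backreferences:

        <IfModule mod_proxy.c>
            # $2 carries the remainder of the path after the matched prefix
            ProxyPassMatch ^/(mldonkey|bittorrent)/(.*)$ http://localhost:4080/$2
            ProxyPassReverse /mldonkey/ http://localhost:4080/
        </IfModule>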

    Read the article

  • AMD processors with graphics card bundled [closed]

    - by shybovycha
    Sorry for posting this question here; I just don't know which StackExchange website I should be writing to. I've heard AMD created processors with the video card bundled. These processors should work about as fast as the usual processor plus discrete video card, but AMD's should use less power and give off less heat. Some googling around gave me results like "AMD processors of the A-series"; they were mentioned as being built using the technology I described above. On the other hand, AMD drivers have a slow release rate and not-very-good quality. I am a game developer and web developer, so I need a powerful processor, a capable graphics card, and a lot of RAM on board (to make it possible to create a sample Grails application, for example, or to create some 3D models in Maya/Cinema4D). Still, I want my battery to be long-living, so power usage is a bit critical for me. So, my questions are: are there any processors built the way I've described, and which series are they (if they exist)? And which processor would fit a laptop best for the purposes mentioned above: an AMD one, or an i5/i7 with an nVidia graphics card?

    Read the article

  • Allow connections to only a specific URL via HTTPS with iptables, -m recent (potentially) and -m string (definitely)

    - by The Consumer
    Hello. Let's say that, for example, I want to allow connections only to subdomain.mydomain.com; I have it partially working, but it sometimes gets into a freaky loop with the client key exchange once the Client Hello is allowed. Ah, to make it even more annoying, it's a self-signed certificate, the page requires authentication, and HTTPS is listening on a non-standard port... so the TCP/SSL handshake experience will differ greatly for many users. Is -m recent the right route? Is there a more graceful method to allow the complete TCP stream once the string is seen? Here's what I have so far:
        #iptables -N SSL
        #iptables -A INPUT -i eth0 -p tcp -j SSL
        #iptables -A SSL -m recent --set -p tcp --syn --dport 400
        #iptables -A SSL -m recent --update -p tcp --tcp-flags PSH,SYN,ACK SYN,ACK --sport 400
        #iptables -A SSL -m recent --update -p tcp --tcp-flags PSH,SYN,ACK ACK --dport 400
        #iptables -A SSL -m recent --remove -p tcp --tcp-flags PSH,ACK PSH,ACK --dport 400 -m string --algo kmp --string "subdomain.mydomain.com" -j ACCEPT
    Yes, I have tried to get around this with nginx tweaks, but I can't get nginx to return a 444 or an abrupt disconnect before the Client Hello; if you can think of a way to achieve this instead, I'm all ears, err, eyes. (As suggested by a user, bringing this inquiry over from http://stackoverflow.com/questions/4628157/allow-connections-to-only-a-specific-url-via-https-with-iptables-m-recent-pote)

    Read the article

  • Referer is passed from HTTPS to HTTP in some cases... How?

    - by ravisorg
    In theory, browsers do not pass on referer information from HTTPS to HTTP sites, and in my experience this has always been true. But I just found an exception, and I want to understand why it works so I can use it as well. Search for "what is my referer" on https://www.google.ca/, e.g.: https://www.google.ca/search?q=what+is+my+referer There are a few sites that will show the referer. They all seem to "work" when they shouldn't. For example, click the www.whatismyreferer.com one. I get: Your referer: https://www.google.ca/ Note that sometimes, rarely, I get "no referer" as the result. Go back and click the link again, and it'll "work" the next time. This should not happen: www.whatismyreferer.com is a non-HTTPS site, so the referer header should not be passed, but it is. What's going on here, and how can I do the same from my HTTPS site to the HTTP sites I'm linking to?
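
    For reference, the mechanism generally credited for this behaviour is the meta referrer policy, which an HTTPS page can use to opt back into sending a referrer on a downgrade to HTTP; a minimal sketch of the tag as it would appear in the linking page's head (browser support varies, which would also explain the occasional "no referer" result):

        <meta name="referrer" content="origin">

    With "origin", the browser sends just the scheme and host (e.g. https://www.google.ca/) rather than the full search URL, which matches the value observed above.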

    Read the article

  • Utility to always maximize almost-maximized windows

    - by AbstractDissonance
    It drives me nuts when a window is nearly maximized but not quite, and when I go to close/move/scroll it I end up bringing up the window behind it. This happens all the time with Acrobat. It seems that its default non-minimized state is usually an "almost" maximized window. Is there any utility that will maximize a window, when it is opened or resized, whenever it is near the size of the monitor? I'll never have any need for a window at 99% of the size of the screen, centered. Why? Because it behaves just like a maximized window by obscuring all (well, almost all) of the windows behind it. (It should be only the main window of an application that is affected.) The issue seems to be a few programs, like Acrobat, that try to fit their contents, which sometimes happen to be almost the full screen but not quite, making the window look maximized when it is not. I do not want all windows to open maximized, but I have no need for a window state that is almost centered and almost the size of the desktop. (I imagine most people don't.)
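
    For reference, a minimal sketch of such a utility in AutoHotkey; the polling interval and the 95%/90% width/height thresholds are arbitrary assumptions:

        #Persistent
        SetTimer, CheckActive, 1000
        return

        CheckActive:
        WinGet, state, MinMax, A
        WinGetPos, x, y, w, h, A
        ; Maximize when the active window is "almost" full-screen but not truly maximized
        if (state = 0 && w >= A_ScreenWidth * 0.95 && h >= A_ScreenHeight * 0.90)
            WinMaximize, A
        return

    An exclusion list for applications whose large windows should be left alone would be a sensible addition.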

    Read the article

  • SQL Server 2012 memory usage steadily growing

    - by pgmo
    I am very worried about the SQL Server 2012 Express instance on which my database is running: the SQL Server process memory usage is growing steadily (1.5GB after only 2 days of work). The database is made of seven tables, each having a bigint primary key (identity) and at least one non-unique index with some included columns to serve the majority of incoming queries. An external application calls some stored procedures via Microsoft OLE DB, each of which does some calculations using intermediate temporary tables and/or table variables and finally does an upsert (UPDATE ... IF @@ROWCOUNT=0 INSERT ...). I never DROP those temporary tables explicitly. The frequency of those calls is about 100 calls every 5 seconds (I saw that the DLL used by the external application opens a connection to SQL Server, does the call, and then closes the connection, for each and every call). The database files are organized in only one filegroup, and the recovery model is set to simple. Some questions to diagnose the problem:
    Is that steadily growing memory usage normal?
    Did I make any mistake in database design which probably leads to this behaviour? (no explicit temp-table drop, filegroup organization, etc.)
    Can SQL Server manage such a stored procedure call rate (100 calls every 5 seconds, i.e. 100 upserts every 5 seconds, beyond the intermediate calculations)?
    Does the continuous "open connection/do sp call/close connection" pattern disturb SQL Server?
    Is it possible to diagnose what is causing such memory usage? Perhaps queues of waiting requests? (I ran sp_who2, but I didn't see a large number of orphan connections from the external application.)
    If I restrict the amount of memory which SQL Server is allowed to use, may I sooner or later get into trouble?
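
    For reference, steadily growing usage is largely the buffer pool holding data pages by design, and it can be capped; a sketch of the usual T-SQL, where the 1024 MB figure is just an example (2012 Express already limits the buffer pool to about 1 GB on its own):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- Cap the buffer pool at 1 GB (example value)
        EXEC sp_configure 'max server memory (MB)', 1024;
        RECONFIGURE;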

    Read the article

  • 3 Monitor PCI-e Graphics card (without tremendous pain)?

    - by N Rahl
    As we are all painfully aware, the only way to get multiple monitors AND compositing (Compiz) on Linux is to use a single graphics card that can drive both (or in my case all three) screens. I bought a Radeon 5750 specifically because it claims to be able to drive 3 monitors. I can plug in 3 monitors (2 DVI, 1 HDMI) and the Catalyst Control Center shows all 3, but only 2 can be enabled at a time. The exact message is:
        The current settings cannot be applied. Possible issues may include:
        - Display(s) cannot be enabled.
        - Setting(s) cannot be applied due to insufficient video memory.
    So I'm going to assume that either the 5750 doesn't support 3 monitors, OR, more likely, ATI couldn't be bothered to add that support to their Linux drivers. So this is a multipart question: First, can anyone suggest a PCI Express graphics card that can run 3 screens on Linux without tremendous pain? I'm looking for something where you install the driver and all three screens "just work". Does such a card exist? Second, if you have a 5750, have you been able to get it to do 3 monitors? I'm running Ubuntu 10.04 at the moment.

    Read the article

  • I have just created a subnet for a local network, connecting to a standalone server on another network, now I cannot connect to the internet

    - by Seth
    I am just learning some new aspects of servers and networking. We have a network of 5 subnets that all interconnect with each other. In order to get two computers onto the subnet that we were setting up, I changed their IPs from the subnet where the standalone server is (where they used to be set up) to the local subnet we are remotely hooking up. Likewise, I also changed the gateway to coincide with the new subnet. The only problem is that since doing this, I am unable to establish a connection to the internet. I can ping the server and the corresponding gateway and DNS server, but cannot get connected to the internet. We do have a dumb switch (non-programmable) connected that receives both the internet and private network inputs and distributes them (or should do so) to about 5 other computers. Bottom line: I cannot currently connect to the internet and am wondering what could be causing this. It is likely something very obvious, and pardon me for being more vague than I probably should be, but I could use some help resolving this! Thanks for any help!
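
    For reference, a sketch of the usual way to find where packets stop, assuming a Windows client; the target addresses are placeholders:

        REM Confirm the client's IP, mask, gateway and DNS are what you expect
        ipconfig /all

        REM Test name resolution separately from routing
        nslookup www.google.com

        REM Walk the path hop by hop; the last hop that answers is where forwarding stops
        tracert 8.8.8.8

    If the trace dies at the new gateway, that router most likely has no route (or no NAT) back out for the new subnet.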

    Read the article

  • Tunnel network requests with Windows 7

    - by mark
    I have a Windows 7 64-bit Pro client in a private LAN behind a Netgear WGR614v7 router. I also have a remote Debian server machine outside. I'd like to tunnel all traffic (or specified ports/protocols) over this outside server, so that when I'm on the Windows machine and request serverfault.com, the request appears to come not from the WGR614v7's public IP but from the server. It's not only about HTTP traffic; it's basically about everything I'd like: other TCP ports, even UDP, etc. It must be transparent to the applications, i.e. they shouldn't be aware of it; all their requests just appear as coming from the server, and the tunnel between them takes care of the packets. I'm aware of e.g. PuTTY and forwarding individual ports or using it as a SOCKS proxy, but not many applications support this, and the support in Windows itself looks non-existent to me. I might add that it should be something reasonably easy to set up. I've heard about PPTP, but I'm unsure about its security implications (by design). Should I go for a VPN? There seem to be two common solutions for Linux (Openswan and strongSwan); why would I pick one over the other? I also fear that setting up a VPN might be quite complex, but maybe it's the only sane way to do things right? Or is OpenVPN sufficient? I'm looking for open (source) solutions; what other options do I have, or which direction should I head?
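
    For reference, a sketch of the OpenVPN route: once a tunnel is up, a single client-side option sends all traffic through the Debian server, transparently to applications; the fragment assumes an otherwise working client config, and 10.8.0.0/24 is OpenVPN's default tunnel subnet:

        # client config (fragment): route everything through the VPN,
        # bypassing the tunnel only for the VPN server itself
        redirect-gateway def1

    The server side then needs IP forwarding and a NAT rule so tunneled traffic leaves with the server's address:

        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE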

    Read the article
